Brain Pickings is one of my favorite blogs. Maria Popova (the author) has written a myriad of literary essays covering topics from science all the way to children’s books. Every post is informative and written with style.
One feature missing from her website is the ability to quickly view all article titles without having to scroll through the content. For this reason I have written a small script that prints a list of article titles and their URLs for a given number of pages.
Technologies used:
Python
BeautifulSoup
Understanding the Script
Start by importing the necessary libraries.
from requests import get
from requests.exceptions import RequestException
from contextlib import closing
from bs4 import BeautifulSoup
def simple_get(url):
    """
    Attempts to get the content at `url` by making an HTTP GET request.
    If the content-type of the response is some kind of HTML/XML, return
    the text content, otherwise return None.
    """
    try:
        with closing(get(url, stream=True)) as resp:
            if is_good_response(resp):
                return resp.content
            else:
                return None

    except RequestException as e:
        log_error('Error during requests to {0} : {1}'.format(url, str(e)))
        return None
def is_good_response(resp):
    """
    Returns True if the response seems to be HTML, False otherwise.
    """
    content_type = resp.headers['Content-Type'].lower()
    return (resp.status_code == 200
            and content_type is not None
            and content_type.find('html') > -1)


def log_error(e):
    """
    It is always a good idea to log errors. This function just prints
    them, but you can make it do anything.
    """
    print(e)
Specify the number of pages to iterate through, and print the results in the console.
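A minimal sketch of that last step might look like the following. The `h1 a` CSS selector and the `/page/N/` URL pattern are assumptions about the Brain Pickings markup; inspect the actual pages and adjust them as needed.

```python
from bs4 import BeautifulSoup


def extract_titles(raw_html):
    """Parse raw HTML and return (title, url) pairs for each article link.

    The 'h1 a' selector is an assumption about the page structure;
    change it to match the site you are scraping.
    """
    html = BeautifulSoup(raw_html, 'html.parser')
    return [(a.text, a.get('href')) for a in html.select('h1 a')]


# Hypothetical usage with the simple_get() helper defined above,
# iterating over the first three pages:
#
# for page in range(1, 4):
#     raw_html = simple_get('https://www.brainpickings.org/page/{0}/'.format(page))
#     if raw_html is not None:
#         for title, url in extract_titles(raw_html):
#             print('{0} -> {1}'.format(title, url))
```

Keeping the parsing in its own function makes it easy to test against a saved HTML snippet without hitting the network.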