Use Python to extract URLs to HTML-format SEC filings on EDGAR

I wrote two posts describing how to download TXT-format SEC filings on EDGAR:

Although TXT-format files have the benefit of being easy to process further, they are often poorly formatted and thus hard to read. An HTML-format 10-K is more pleasing to the eye. The SEC also provides the paths (namely, URLs) to HTML-format filings. With such a path, we can open an HTML-format filing in a web browser, or download the filing as a PDF.

The Python code consists of two parts. In the first part, we download the path data. Instead of the master.idx used in the above two posts, we need crawler.idx for this task. The path we get will be a URL like this:

https://www.sec.gov/Archives/edgar/data/859747/0001477932-16-007969-index.htm

Note that this path is a URL to an index page, not to the HTML-format 10-Q itself in this example. To get the direct URL to the HTML-format 10-Q, we have to go one level deeper. The second part of the Python code goes that level deeper and extracts the direct URL to the main body of the form (the URL embedded in the first row of the index page in more than 99% of cases). The code also extracts information such as the filing date and the period of report from the index page, and writes the output (filing date, period of report, and direct URL) to log.csv. The following is an output example: the first URL is the path we get in the first part of the code; the second URL is the direct URL to the HTML-format form.

The first part of the code:
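Conceptually, this part boils down to something like the following minimal sketch. The sample quarter, the database, table, and column names, and the User-Agent string are my own choices, not necessarily those in the original code, and the Stata export is omitted here:

    import requests
    import sqlite3

    # Download one quarterly crawler.idx, parse its fixed-width columns,
    # and save the records to SQLite.
    year, quarter = 2016, 4
    url = (f'https://www.sec.gov/Archives/edgar/full-index/'
           f'{year}/QTR{quarter}/crawler.idx')

    # The SEC asks for a descriptive User-Agent; put your own details here.
    headers = {'User-Agent': 'Your Name your@email.com'}
    lines = requests.get(url, headers=headers).text.splitlines()

    # The header row tells us where each fixed-width column starts.
    header = next(i for i, l in enumerate(lines) if l.startswith('Company Name'))
    cols = ('Company Name', 'Form Type', 'CIK', 'Date Filed', 'URL')
    starts = [lines[header].index(c) for c in cols]

    records = []
    for line in lines[header + 2:]:      # skip the header and the dashed rule
        if len(line) <= starts[-1]:
            continue
        records.append([line[s:e].strip()
                        for s, e in zip(starts, starts[1:] + [len(line)])])

    with sqlite3.connect('edgar_idx.db') as conn:
        conn.execute('CREATE TABLE IF NOT EXISTS idx '
                     '(conm TEXT, type TEXT, cik TEXT, date TEXT, path TEXT)')
        conn.executemany('INSERT INTO idx VALUES (?, ?, ?, ?, ?)', records)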

The first part of the code generates a dataset with the complete path information of SEC filings for the selected period (in both SQLite and Stata formats). You can then select a sample based on firm, form type, filing date, etc., and feed a CSV file to the second part of the code. The feeding CSV should look like this:
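As a hypothetical illustration (the column layout is my guess; the CIK and URL come from the example above, while the date is a placeholder):

    cik,type,date,path
    859747,10-Q,2016-11-10,https://www.sec.gov/Archives/edgar/data/859747/0001477932-16-007969-index.htm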

The second part of the code:
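Again, only a sketch of the idea: the original code drives a browser through selenium (see the notes below), while this version substitutes requests and regular expressions. The input and output column layouts, and the exact HTML patterns on the index page, are my assumptions and may need adjusting if EDGAR changes its markup:

    import csv
    import re
    import requests

    HEADERS = {'User-Agent': 'Your Name your@email.com'}  # SEC asks for a contact

    with open('sample.csv', newline='') as f, \
         open('log.csv', 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(['index_url', 'date_filed', 'period', 'direct_url'])
        for row in csv.DictReader(f):   # assumes a 'path' column (index-page URL)
            html = requests.get(row['path'], headers=HEADERS).text

            # Filing date and period of report are shown on the index page.
            filed = re.search(r'Filing Date</div>\s*<div[^>]*>\s*([\d-]+)', html)
            period = re.search(r'Period of Report</div>\s*<div[^>]*>\s*([\d-]+)', html)

            # In the vast majority of cases, the first /Archives link in the
            # document table points to the main body of the form.
            doc = re.search(r'href="(/Archives/[^"]+\.htm[^"]*)"', html)
            direct = 'https://www.sec.gov' + doc.group(1) if doc else ''

            writer.writerow([row['path'],
                             filed.group(1) if filed else '',
                             period.group(1) if period else '',
                             direct])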

Note:

  1. Please use Python 3.x.
  2. Please install all required modules, such as selenium. Google the relevant documentation if you do not know how to install them.
  3. The second part of the code only outputs the direct URL to the HTML-format filing. If you want to save the filing as a PDF, you need to write additional Python code on your own (one possible approach is sketched below).
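One possible approach (my suggestion, not part of the original code) is the pdfkit module, a wrapper around the wkhtmltopdf command-line tool, which must be installed separately:

    import pdfkit

    # Hypothetical example: render one direct filing URL (taken from
    # log.csv) to a PDF. Requires the wkhtmltopdf binary on PATH.
    direct_url = 'https://www.sec.gov/Archives/...'  # placeholder URL
    pdfkit.from_url(direct_url, 'filing.pdf')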

7 Responses to Use Python to extract URLs to HTML-format SEC filings on EDGAR

  1. sara says:

    Hi Kai,
    Thank you very much for sharing. I am new to Python. Your posts really help me a lot. I was able to run the first part of the code. The second part also ran, and the output file is log.csv. As you said, the csv file contains the direct URL to the HTML-format filing. Do you know what code I can use to get the HTML-format filing directly? To make it clear, how can I save each filing as an HTML file automatically? Thanks!

    • Kai Chen says:

      Hi Sara, there are many ways to do this. Just google “python download html” or something similar, and you will find solutions (e.g., using the requests or urllib module).
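      For example, a minimal sketch with requests (the URL here is a placeholder; take real ones from log.csv):

          import requests

          # Download one filing and save it as a local HTML file.
          direct_url = 'https://www.sec.gov/Archives/...'  # from log.csv
          resp = requests.get(direct_url,
                              headers={'User-Agent': 'Your Name your@email.com'})
          with open('filing.htm', 'w', encoding='utf-8') as f:
              f.write(resp.text)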

  2. Sylvia Li says:

    Kai,
    Thank you for sharing your code. I am very new to Python. I am learning Python while trying to scrape some data from websites (SEC, etc.). I could run the first part without any issue. However, I get error messages when I try to run the second part. Would you mind checking what might have gone wrong?

    ---------------------------------------------------------------------------
    FileNotFoundError Traceback (most recent call last)
    ~\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\common\service.py in start(self)
    73 close_fds=platform.system() != 'Windows',
    ---> 74 stdout=self.log_file, stderr=self.log_file)
    75 except TypeError:

    ~\AppData\Local\Continuum\anaconda3\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors)
    708 errread, errwrite,
    --> 709 restore_signals, start_new_session)
    710 except:

    ~\AppData\Local\Continuum\anaconda3\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
    996 os.fspath(cwd) if cwd is not None else None,
    --> 997 startupinfo)
    998 finally:

    FileNotFoundError: [WinError 2] The system cannot find the file specified

    During handling of the above exception, another exception occurred:

    WebDriverException Traceback (most recent call last)
    in <module>()
    16 start_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())
    17
    ---> 18 driver = webdriver.Chrome('./chromedriver')
    19
    20 try:

    ~\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\chrome\webdriver.py in __init__(self, executable_path, port, chrome_options, service_args, desired_capabilities, service_log_path)
    60 service_args=service_args,
    61 log_path=service_log_path)
    ---> 62 self.service.start()
    63
    64 try:

    ~\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\common\service.py in start(self)
    79 raise WebDriverException(
    80 "'%s' executable needs to be in PATH. %s" % (
    ---> 81 os.path.basename(self.path), self.start_error_message)
    82 )
    83 elif err.errno == errno.EACCES:

    WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home

  3. Bob says:

    Hey Kai,

    thanks for your great tutorial! It works just fine. But I have a question: sometimes the loop “for url in urls:” in part 1 takes very long (or does not finish at all; I didn’t let it run for a whole night), for example for 2017Q3 or 2011Q4 and some others. Getting all quarterly filings for 2015-2016 works fast and seems to be complete.

    Do you have any idea why this is happening?

    Best regards!

    • Kai Chen says:

      Hi, thanks for your feedback. It’s an interesting question. I did a quick test on 2017Q3. It turns out that this line, "lines = requests.get(url).text.splitlines()", is what drags the execution down. I don’t know why, but it appears to be related to the idx file itself.
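      One thing worth trying (a sketch, not a tested fix) is to stream the idx file with a timeout, so a stalled download fails fast instead of hanging:

          import requests

          # Stream crawler.idx line by line instead of loading the whole
          # response at once; time out if the server stalls.
          def fetch_idx_lines(url):
              headers = {'User-Agent': 'Your Name your@email.com'}
              with requests.get(url, stream=True, timeout=60,
                                headers=headers) as r:
                  r.raise_for_status()
                  yield from r.iter_lines(decode_unicode=True)

          lines = list(fetch_idx_lines('https://www.sec.gov/Archives/'
                                       'edgar/full-index/2017/QTR3/crawler.idx'))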
