Deadlinks crawler

There are several ways to implement a website crawler: a synchronous or asynchronous architecture, plain HTTP requests or a rendering engine such as PhantomJS, or a thread pool. Since the technical requirements said nothing about this, I decided to keep it simple. It is not intended for production use because it is fairly slow (it uses regular "blocking" requests).
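A minimal sketch of the blocking approach described above, using only the Python standard library (the function and class names here are illustrative, not the repository's actual API): extract the links from a page, resolve them against the page URL, and check each one with a single synchronous request.

```python
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urldefrag, urljoin
from urllib.request import Request, urlopen


class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(base_url, html):
    """Return absolute, fragment-free URLs found in the page's HTML."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urldefrag(urljoin(base_url, href))[0] for href in parser.links]


def check_link(url, timeout=5):
    """Check one URL with a blocking HEAD request.

    Returns (url, status_code); status is None when the host is unreachable.
    """
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as err:
        # The server answered, but with an error status (404, 500, ...).
        return url, err.code
    except URLError:
        # DNS failure, refused connection, timeout, etc.
        return url, None
```

Because each `check_link` call blocks until the server responds, crawl time grows linearly with the number of links, which is exactly why this design is slow compared to an asynchronous or threaded one.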
