Type: New Feature
Affects Version/s: None
Fix Version/s: None
As stated in a comment on Whirr's JIRA:
We should generate a big file (1-5 GB?) for the PageRank example. We wanted to add this as part of the contrib, but somehow skipped/lost it.
I started crawling several pages, starting from Google News, but then my free Amazon EC2 quota expired and I had to stop the crawl.
> We need some cloud to crawl
> We need a place to make the data available
The stuff we need is already coded here:
Afterwards, an M/R processing job in the subpackage "processing" has to be run on the output of the crawler. This job ensures that the adjacency matrix is valid.
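The actual job lives in the "processing" subpackage; as a hedged sketch of what such a validation pass might do (the function name, edge format, and the specific rule of dropping dangling links are all assumptions, not the real implementation), the idea in plain Python without Hadoop could look like:

```python
# Assumed sketch, not the real "processing" job: crawler output is taken to be
# (source, target) link pairs. One plausible validity rule is to drop edges
# whose target page was never crawled (dangling links), so every column of the
# adjacency matrix has a matching row, and to deduplicate repeated edges.
def validate_adjacency(edges):
    # Pages we actually crawled are those that appear as a source.
    crawled = {src for src, _ in edges}
    adjacency = {}
    for src, dst in edges:
        if dst in crawled:  # keep only edges into crawled pages
            adjacency.setdefault(src, set()).add(dst)  # set() deduplicates
    # Sort targets for a deterministic adjacency-list representation.
    return {src: sorted(dsts) for src, dsts in adjacency.items()}

# "c" was linked to but never crawled, and a->b appears twice.
edges = [("a", "b"), ("b", "a"), ("a", "c"), ("a", "b")]
print(validate_adjacency(edges))  # {'a': ['b'], 'b': ['a']}
```

In a real M/R formulation, the "which pages were crawled" set would come from a first pass (or a join against the crawl log), and the filtering/deduplication would happen in the reducer per source page.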