
Co-Captain @Armada Digital (goal: coop) | Love to promote #Indieweb #CivicTech #Dataviz #RSS | Curious about #Blockchain and the future of work enabled by #platformcoop

Feel free to share files with me using my Nextcloud federated ID:

Should appear as "Live" when I'm broadcasting Live with @periscope @RickMendes

Ricardo Mendes

The Brussels Civic Data Hack: Episode 1 #civictech #digityser #dataviz



Like nearly everywhere in Europe, Belgium is undergoing a crisis of trust between citizens and their legitimate institutions. Brussels, in particular, is having a bad year. This feeds on news both real and fake, leaving citizens disoriented and ever more mistrustful.

We believe that we, ordinary citizens, can help: by using data, open data in particular, to investigate problems for ourselves, and by sharing our results with our fellow citizens so that they can be replicated and criticized. No need to trust fake news when you can go straight to the source yourself!


We wish to organise a small event called a civic hackathon. Everybody interested in a civic use of data and data science is welcome, without exception and with no skill requirements. It’s about gathering under one roof to work with data with a civic purpose in mind. It could be anything: applications tracking parliament activities; crunching and visualisation of societally relevant datasets; dreaming up infographics that would make a complex issue more accessible, and so on.


We envision two steps. In step 1, we gather together for a few hours to hash out some possible projects to develop in the hackathon proper: we make sure each project has the data it needs and a skeletal crew with all the skills to pull it off. In step 2, two weeks or so later, we take one or two full days to actually develop projects. The results of the different projects are presented to the whole group at the end of step 2. All code is released as open source. Processed data are published as open data wherever the license of the primary data allows it.


Absolutely everyone is welcome. All you need to take part in a civic hackathon is a passion for all things civic. Every skill is needed. Among them (but there are others) are: software development, statistics, math, journalism, law, design, communication, storytelling. Female and minority participants (however you want to define “minority”) are particularly welcome.

We are Ricardo, Philippe and Alberto. We commit to working the hardest, but welcome any help anyone can offer.




Ricardo Mendes


Last year I was discussing with a friend a browser extension geared to inform the public about the political, corporate and ad-business ties of each news website, and about the type of news source it is.

There could be a distinction between state media (think RT), public media (think RTBF) and privately owned media (think RTL), and an indication of whether any of these three types receives public funding and how that funding is used.

The extension should be able to show political and corporate ties between the owners of a particular news source, but also the links between data brokers/advertisers and the owners of the website.

If the news source receives any type of subsidy, funding or donation from political or corporate actors, this should also be presented visually to the user.

The only extension I know of that provides this type of information, focused on US politicians, shows the links between money and Congress.

But what if it could be expanded to show much more contextual data: links between governments and civil servants, media property owners, ad- and data-tracking brokers, economic ties between entities, and corporate donations to these particular outlets?

What if, by going to a news website, you could understand the political colors, interests and biases of any news source just by looking at the financial and political ties underlying its narrative?

I'm convinced this information is important to better understand the collusion and revolving doors between media, corporations, politics and lobbying.

The datasets that would power this extension could be open, public, transparent and fact-checked by multiple peers, allowing greater transparency and reliability of the data.
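As a concrete starting point, the records behind such an extension could look something like this minimal Python sketch. The `Outlet` type, its fields and the `lookup` helper are hypothetical names for illustration, not an existing schema:

```python
from dataclasses import dataclass, field

# Hypothetical record types for the open dataset behind the extension.
# Field names are illustrative assumptions, not an existing standard.

@dataclass
class Outlet:
    domain: str                                  # e.g. "example-news.be"
    media_type: str                              # "state" | "public" | "private"
    owners: list = field(default_factory=list)   # owning entities
    funders: list = field(default_factory=list)  # subsidies, donations
    trackers: list = field(default_factory=list) # ad/data brokers on the site

def lookup(outlets, domain):
    """Return the record the extension would display for a visited site."""
    return next((o for o in outlets if o.domain == domain), None)

dataset = [
    Outlet("example-news.be", "private",
           owners=["Example Media Group"],
           funders=["Example Holding (donation)"],
           trackers=["adtech.example"]),
]

record = lookup(dataset, "example-news.be")
print(record.media_type)  # -> private
```

Because the records are plain data, the dataset itself could live in a public, version-controlled repository where peers review every change.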

The question is: where do I start?

Ricardo Mendes

I need to find or code a command-line tool that stores tweet metadata in real time, a bit like existing web tools but on the terminal. I want to be able to trigger the command when new storms arise, with the ultimate ability to save the data in the GEXF file format so I can import it back into Gephi, Neo4j or other tools. The next step would be to make this code able to detect new social media storms or "moments" and start storing metadata when any particular topic rises above a certain threshold. The idea is that you don't always find yourself at the beginning of a social media storm; the code needs to be able to detect possible storms in some measure and store the data for later analysis.
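The threshold-based detection described above could be sketched roughly as a sliding-window counter. The `StormDetector` class, its parameters and the tweet format are illustrative assumptions, not an existing tool:

```python
from collections import deque

# Sketch of threshold-based "storm" detection, assuming tweets arrive
# as (timestamp_in_seconds, metadata_dict) pairs from some stream source.

class StormDetector:
    def __init__(self, window=60, threshold=100):
        self.window = window        # sliding window length, in seconds
        self.threshold = threshold  # tweets per window that counts as a storm
        self.times = deque()        # timestamps inside the current window
        self.recording = False
        self.buffer = []            # metadata captured once a storm starts

    def on_tweet(self, ts, metadata):
        self.times.append(ts)
        # Drop timestamps that have fallen out of the sliding window.
        while self.times and ts - self.times[0] > self.window:
            self.times.popleft()
        if len(self.times) >= self.threshold:
            self.recording = True   # a storm is underway: start storing
        if self.recording:
            self.buffer.append(metadata)
        return self.recording

# Toy usage: 5 tweets one second apart, storm declared at the 3rd.
det = StormDetector(window=60, threshold=3)
for ts in range(5):
    det.on_tweet(ts, {"ts": ts})
print(len(det.buffer))  # -> 3
```

A real version would keep a small rolling buffer of pre-storm tweets too, since by the time the threshold trips, the first tweets of the storm have already passed.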
Another feature would be a nice dashboard to browse and visualize the stored data. Of course, this tool would need to dive into the Twitter API to realize all these possibilities, but we all know Twitter's constant limitations on using its API, so I was wondering about the possibility of using web scraping instead of API methods to bypass Twitter's limits, like going back in time more than X days or retrieving more than X tweets. I'm not sure what the best language would be to go ahead with this idea, but maybe NodeJS or Python?
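GEXF is plain XML, so a first version of the export step could avoid extra dependencies entirely. Below is a minimal sketch of a GEXF writer using only the Python standard library; the retweet records and the `to_gexf` helper are made-up illustrations:

```python
import xml.etree.ElementTree as ET

# Minimal GEXF 1.2 writer using only the standard library, so a captured
# retweet graph can be opened in Gephi or fed to other graph tools.

def to_gexf(edges):
    """edges: iterable of (retweeter, original_author) pairs."""
    nodes = sorted({name for edge in edges for name in edge})
    index = {name: str(i) for i, name in enumerate(nodes)}

    gexf = ET.Element("gexf", xmlns="http://www.gexf.net/1.2draft",
                      version="1.2")
    graph = ET.SubElement(gexf, "graph", defaultedgetype="directed")
    nodes_el = ET.SubElement(graph, "nodes")
    for name in nodes:
        ET.SubElement(nodes_el, "node", id=index[name], label=name)
    edges_el = ET.SubElement(graph, "edges")
    for i, (src, dst) in enumerate(edges):
        ET.SubElement(edges_el, "edge", id=str(i),
                      source=index[src], target=index[dst])
    return ET.tostring(gexf, encoding="unicode")

# Toy retweet graph: who retweeted whom.
retweets = [("alice", "bob"), ("carol", "bob"), ("alice", "carol")]
xml_text = to_gexf(retweets)
with open("storm.gexf", "w") as f:
    f.write(xml_text)
```

For anything beyond a prototype, a graph library such as networkx (which ships its own GEXF export) would be the more maintainable choice.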

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.