
Ricardo Mendes

The Brussels Civic Data Hack: Episode 1 #civictech #digityser #dataviz



Why

Like nearly everywhere in Europe, Belgium is undergoing a crisis of trust between citizens and their legitimate institutions. Brussels and Wallonia, in particular, are having a bad year.
Scandals, corruption and the parties' inability to clean up their own ranks are feeding a narrative in which, for example, the whole PS is corrupt, when in fact, depending on the scandal, CDH and MR are part of the same system; PS simply takes most of the heat because it has been in power for the last 25 years or so.

What if we could precisely visualise the data and spot possible conflicts of interest between institutions, public services, private business and political parties?

What if we could explain the problem of the accumulation of mandates, and the possible conflicts of interest it creates, through state-of-the-art visualisations that help anyone, without any specialised skills, form their own opinion on these issues?

We believe we, ordinary citizens, can help. We wish to help by using data, open data in particular, to investigate problems for ourselves and share our results with our fellow citizens, so they can be used to tell factual stories about these scandals, giving citizens a better perception of the issues at stake and enabling better choices in the next electoral round (2018-2019).

What

We wish to organise a small event called a civic hackathon. Everybody interested in a civic use of data and data science is welcome, without exception and with no skill requirements. It’s about gathering under one roof to work with data with a civic purpose in mind. It could be anything: applications tracking parliament activities; crunching and visualisation of societally relevant datasets; dreaming up infographics that would make a complex issue more accessible, and so on.

There is already an open-data movement in Belgium, and civil society initiatives like Cumuleo track mandates and keep this information in the public sphere. We do not want to reinvent the wheel, so we want to invite you all and see what we can do together.

How

We envision two steps. In step 1, we gather together for a few hours to hash out some possible projects to develop in the hackathon proper: we make sure each project has the data it needs and a skeletal crew with all the skills to pull it off.

In step 2, two weeks or so later, we take one or two full days to actually develop projects.

The results of the different projects are presented to the whole group at the end of step 2. All code is released as open source. Processed data are published as open data wherever the license of the primary data allows it.

Who

Absolutely everyone is welcome. All you need to take part in a civic hackathon is a passion for all things civic. Every skill is needed. Among them (but there are others) are: software development, statistics, math, journalism, law, design, communication, storytelling. Female and minority participants (however you want to define “minority”) are particularly welcome.

We are Ricardo and Alberto. We commit to working the hardest, but welcome any help anyone can offer.

When

September?

Ricardo Mendes

Last year I was discussing with a friend a browser extension geared to informing the public about the political, corporate and ad-business ties of each news website, and about the type of news source behind it.

There could be a distinction between state media (think RT), public media (think RTBF) and privately owned media (think RTL), as well as whether any of these three types receives public funding and how it is used.

The extension should be able to show political and corporate ties between the owners of a particular news source, but also the links between data brokers/advertisers and the owners of the website.

If the news source receives any type of subsidy, funding or donation from political or corporate actors, this should also be presented visually to the user.

The only extension that provides this type of information, on top of US politicians, is http://allaregreen.us/, which shows the links between money and Congress.

But what if it could be expanded to show much more contextual data: governments and civil servants, media property owners, ad and data-tracking brokers, economic links between entities, and corporate donations to particular outlets?

What if, by going to a news website, you could understand the political colours, interests and biases of any news source just by looking at the financial and political ties underlying its narrative?

I'm convinced this information is important to better understand the collusion and revolving doors between media, corporations, politics and lobbying.

The datasets that would power this extension could be open source, public, transparent and fact-checked by multiple peers, allowing greater transparency and reliability of the data.
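As a sketch of what a single record in such a dataset might look like, here is one possible shape in Python. Every field name and value is a hypothetical placeholder for illustration, not an existing schema.

# Hypothetical record shape for an open "media ties" dataset; all field
# names and values below are made up for illustration.
outlet = {
    "name": "Example News",
    "url": "https://example-news.example",
    "type": "private",                       # "state" | "public" | "private"
    "owners": [
        {"name": "Example Holding", "kind": "corporate"},
    ],
    "public_funding": [
        {"source": "Example Region", "amount_eur": 100000, "year": 2016},
    ],
    "political_ties": [
        {"person": "J. Doe", "role": "board member", "party": "Example Party"},
    ],
    "ad_trackers": ["tracker.example.com"],  # data brokers / ad networks seen on the site
}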

The question is: where do I start?

Ricardo Mendes

I need to find or write a command-line tool that stores tweet metadata in real time, a bit like flocker.outliers.es but in the terminal. I want to be able to trigger the command when a new storm arises, and ultimately to save the data as a GEXF file so I can import it back into Gephi, Neo4j or other tools.

The next step would be to make this code able to detect new social media storms or "moments" on its own and start storing metadata when one rises above a certain threshold. The idea is that you don't always find yourself at the beginning of a social media storm; the code needs to be able to detect possible storms in some measure and store the data for later analysis.

Another feature would be a nice dashboard to browse and visualise the stored data. Of course, this tool would need to dive into the Twitter API to realise all these possibilities, but we all know Twitter's constant limitations on using its API, so I was wondering about using web scraping instead of API methods to bypass Twitter's limits, like going back in time more than X days or more than X tweets. I'm not sure what would be the best language to go ahead with this idea, but maybe NodeJS or Python?
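For the capture-and-export part, here is a minimal sketch of one way this could look in Python, assuming Tweepy 3.x for the streaming API (the streaming classes changed in later versions) and NetworkX for the GEXF export. The keywords, threshold and credentials are placeholders.

# Minimal sketch: stream tweets, count tweets/minute as a crude storm
# signal, store author -> mention edges, and export to GEXF for Gephi.
import time

import networkx as nx
import tweepy

TRACK = ["#examplestorm"]      # hypothetical keywords to watch
STORM_THRESHOLD = 50           # tweets per minute before we call it a storm

class StormListener(tweepy.StreamListener):
    def __init__(self):
        super().__init__()
        self.graph = nx.DiGraph()          # author -> mentioned-user edges
        self.window_start = time.time()
        self.window_count = 0

    def on_status(self, status):
        # Crude storm detection: count tweets per 60-second window.
        self.window_count += 1
        if time.time() - self.window_start > 60:
            if self.window_count > STORM_THRESHOLD:
                print(f"possible storm: {self.window_count} tweets/min")
            self.window_start, self.window_count = time.time(), 0

        # Store mention metadata as a directed graph.
        author = status.user.screen_name
        for mention in status.entities.get("user_mentions", []):
            self.graph.add_edge(author, mention["screen_name"])

    def save(self, path="storm.gexf"):
        nx.write_gexf(self.graph, path)    # import back into Gephi/Neo4j

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

listener = StormListener()
stream = tweepy.Stream(auth, listener)
try:
    stream.filter(track=TRACK)             # blocks; Ctrl-C to stop and save
except KeyboardInterrupt:
    listener.save()

The scraping alternative would swap the stream for an HTML scraper feeding the same graph, which sidesteps the API's look-back limits at the cost of fragility and terms-of-service questions.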

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.