… according to ADR 2013:
by Francesco Flammini, IEEE Senior Member
I am quite sure most readers know well about the unreliability of the first computers, when ‘bugs’ could be real insects and not programmers’ faults. Well, after many years, most computers continue to be quite unreliable, mainly due to increasing complexity which is often not well mastered by software engineers. One may probably tolerate hang-ups, blue screens, and even wrong results when running software on personal computers without getting too angry and frustrated; however, nobody would even think of accepting the risk of bugs causing accidents in brake-by-wire or any other critical control system. That is why the latter are developed and tested in a way which is significantly different, more rigorous and time consuming, while the same effort would not be justified for non-critical systems.
But there are situations in which you may still have faults regardless of how much effort you put into software development: think, for instance, of cosmic radiation, which may cause bit flips in capacitor-based memories, or of compiler faults, which are out of your control. In those and other situations, engineers rely upon redundancy, that is the use of multiple modules performing the same task, and diversity, that is the differentiation of programmers and development tools in order to prevent the same faults from showing up in different modules. Redundancy can be spatial, with modules operating in parallel, or temporal, with modules operating sequentially. In any case, the outputs of the modules are compared in order to check whether they agree on the same result.
In other words, a concept is employed which is similar to the one used in politics when an important decision has to be made by checking the opinions of different people: just a few in case they are well experienced and educated on the matter, a lot more in case there are few guarantees about their knowledge and skills.
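The comparison of redundant, diversely developed modules can be sketched in a few lines of Python (a minimal illustration only: the two ‘diverse’ implementations and the safe-state reaction are hypothetical, not taken from any real brake-by-wire system):

```python
def brake_force_v1(speed_kmh: float) -> float:
    """First diverse implementation (hypothetical): direct multiplication."""
    return round(0.5 * speed_kmh, 3)

def brake_force_v2(speed_kmh: float) -> float:
    """Second diverse implementation (hypothetical): same specification,
    different code path, ideally written by a different team."""
    return round(speed_kmh / 2.0, 3)

def duplex_with_comparison(speed_kmh: float) -> float:
    """Run both modules and compare their outputs.
    On disagreement, fall back to a safe state instead of trusting either."""
    a = brake_force_v1(speed_kmh)
    b = brake_force_v2(speed_kmh)
    if a != b:
        raise RuntimeError("Redundant modules disagree: entering safe state")
    return a

print(duplex_with_comparison(120.0))  # both modules agree -> 60.0
```

A duplex arrangement like this can only detect a fault, not mask it; masking requires at least three modules and a voter, as discussed below.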
Well, democratic decision making may be imitated to fuse decisions coming from different (or differently installed) sensors, processors, or any other computing devices. A basic knowledge of probability theory ensures that if two independent sources A and B agree on a result, the probability that they are both wrong is the product of their individual error probabilities, P(A wrong and B wrong) = P(A wrong) · P(B wrong), which is (very) low, that is (much) lower than the probability of A or B being wrong singularly. It is rather intuitive that the same concept can be extended to larger populations of individuals. After all, there are few doubts about which is the most valuable lifeline in “Who Wants to Be a Millionaire?”…
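As a back-of-the-envelope illustration (the error probabilities are hypothetical, and independence between the two sources is assumed):

```python
# Under independence, the probability that two sources are BOTH wrong
# is the product of their individual error probabilities.
p_a = 0.01   # hypothetical probability that source A is wrong
p_b = 0.02   # hypothetical probability that source B is wrong

p_both_wrong = p_a * p_b
print(p_both_wrong)  # about 2e-4, far lower than either 1e-2 or 2e-2
```

Note that this multiplication is only valid if the sources fail independently, which is precisely why diversity matters: common mode failures destroy the independence assumption.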
Formally speaking, in majority voting among M individuals, a decision is taken (YES) or rejected (NO) depending on whether the following condition is satisfied: the number of YES votes, Σᵢ vᵢ for i = 1…M (with vᵢ = 1 for a YES vote and vᵢ = 0 for a NO vote), is strictly greater than M/2.
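In code, majority voting among M binary votes reduces to checking whether the YES votes exceed half of the population:

```python
def majority_vote(votes: list) -> bool:
    """Return True (YES) iff strictly more than half of the M votes are YES.
    A tie in an even-sized population does not reach a majority, so it is NO."""
    m = len(votes)
    yes = sum(1 for v in votes if v)
    return yes > m / 2

print(majority_vote([True, True, False]))         # True  (2 of 3)
print(majority_vote([True, False, True, False]))  # False (tie, 2 of 4)
```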
Now, majority voting is exactly the concept used in the so-called N-modular redundant computer architectures, where different processors, electrically segregated and running diversely developed programs, operate in parallel and their results are compared in order to reach an agreement on which output can be considered correct with a certain, quantifiable level of dependability.
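For the simplest N-modular case, triple modular redundancy with a 2-out-of-3 voter, the dependability gain can indeed be quantified (a sketch assuming independent module failures and a hypothetical per-module error probability):

```python
# A 2-out-of-3 voter produces a wrong output only if at least two of the
# three independent modules are wrong at the same time.
p = 0.01  # hypothetical probability that a single module is wrong

# exactly two wrong (3 possible pairs) + all three wrong
p_voted = 3 * p**2 * (1 - p) + p**3
print(p_voted)  # roughly 3e-4, versus 1e-2 for a single module
```

The same combinatorial argument extends to any N and any voting threshold, which is how the “quantifiable level of dependability” mentioned above is actually computed.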
Are there any differences between reaching a consensus with majority voting in computer systems and among human beings? Well, the answer is yes: in the Web 2.0 era, the assumption that people do not influence each other is not realistic. In fact, discussions on Facebook and other social networks have been shown to significantly bias opinions. Furthermore, in politics the answer to important questions is often not merely correct or wrong: it is about taking the right (i.e. wisest) decision considering the context, the expected long-term consequences, as well as the well-being of the highest number of citizens. However, intuition suggests that web-driven majority voting could still provide some of the advantages mentioned above for computer systems.
First of all, let’s say that – on average – people trust computers more than they trust politicians. From an engineering point of view, perhaps the reason lies in the fact that – though coming from different parties – governments are often affected by the so-called ‘common mode failures’: they tend to be made up of people sharing the same will to get a ‘return on the investment’ and featuring limited technical skills. The cost to society of having thousands of them instead of hundreds (or tens, depending on the case) would be overly high. In fact, the costs associated with politicians tend to be quite high, and the general trend is toward reduction.
Now, a quite obvious question arises: since we do not trust politicians so much, shouldn’t we, as citizens, govern our countries and cities by ourselves? After all, over all those years we have raised our average level of education and developed all the enabling technologies. Unfortunately, so far e-voting seems to be considered mostly a means to securely substitute the traditional ballot with an electronic one. Not many socio-technical studies address the issue of distributed agreement involving a large number of heterogeneous individuals as a standard mechanism to support governments in everyday decision making.
Nobody would even think of being governed by shy and solitary geniuses, due to their limited social and communicative skills; however, it is a pity that people like them will never play an active role in politics. Depending on their expertise, their opinion could be essential, much more than those of less educated individuals. I would say their judgement should be weighted even more. Wouldn’t that be meritocracy at its essence?
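Weighting judgements by expertise, as suggested above, corresponds to weighted majority voting, a well-known generalisation of the plain majority rule (a minimal sketch; the voters and weights are invented for illustration):

```python
def weighted_majority(votes_and_weights):
    """votes_and_weights: iterable of (vote: bool, weight: float).
    YES wins iff the summed weight of YES votes exceeds half the total weight.
    With all weights equal to 1 this reduces to plain majority voting."""
    total = sum(w for _, w in votes_and_weights)
    yes = sum(w for v, w in votes_and_weights if v)
    return yes > total / 2

# Three voters: one domain expert (weight 3) outvotes two laypersons (weight 1).
ballots = [(True, 3.0), (False, 1.0), (False, 1.0)]
print(weighted_majority(ballots))  # True
```

How to assign the weights fairly is, of course, the socio-technical question rather than the algorithmic one.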
I think we should go further in developing a better way of involving smart people in politics, allowing them to participate in the decision-making processes of local authorities and to join extended expert committees on the basis of their résumés – all without the stress of elections, commuting or changing jobs. The enabling, secure ICT tools are already there or may be developed quite easily. The still-open issue is how to combine and organise those tools so as to optimise decision making in local and central governments, improving the quality of politics and reducing the costs for citizens.
Call it e-democracy, e-government, e-participation or direct democracy: all the related paradigms have something to do with ensemble-based voting in decision making, which is the simplest way of achieving a reliable result out of possibly unreliable sources. Just like in safety-critical computers.
 Parhami, B.: A taxonomy of voting schemes for data fusion and dependable computation. In: Reliability Engineering and System Safety, Vol. 52, No. 2, May 1996: pp. 139-151
 Polikar, R.: Ensemble based systems in decision making. In: IEEE Circuits and Systems Magazine, Vol. 6, No. 3, Third Quarter 2006: pp. 21-45
 Rios Insua, D., French, S. (Eds.): e-Democracy. Springer, Advances in Group Decision and Negotiation, Vol. 5, 1st Edition, 2010
 Wikipedia entry on ‘Computer’ and ‘Bugs’: http://en.wikipedia.org/wiki/Computer#Bugs
Excerpts from the CIS Book Review by Michael Greenberg (*) on the Wiley journal Risk Analysis (Vol. 32, No. 8, 2012):
“[…] Critical Infrastructure Security provides “the most up to date compendium of critical infrastructure” literature.
The contributors are government, private, and university experts from Europe and North America; several are Society for Risk Analysis (SRA) members.
[…] the book contains five parts and 19 chapters. “Fundamentals of Security Risk and Vulnerability Assessment” are addressed in Part I, which contains chapters about models and vulnerability assessment. Part II offers four chapters about modeling and simulation tools, including game theory and graphical simulation tools. Part III focuses on cyber security and supervisory control and data acquisition (SCADA) systems, and Part IV has five chapters about monitoring and surveillance technologies. The last four chapters, Part V, are about integrating security systems and using alarms.”
“The book is a handy reference, especially for cyber security, sensors, and several other subareas. Some of the chapters are particularly well done. My favorites were vulnerability assessment, game theory, information technology risks, intelligent video surveillance, and terahertz for weapon and explosive detection.”
“If I was teaching a course about critical infrastructure […], I would then pick from among the following books, each of which has a focus on a specific area, such as cyber security, structures, and so on. Critical Infrastructure Security fits into this last set of books, with the clear advantage of presenting recent advancements […]”
(*) Dr. Michael Greenberg, Professor and Director of the National Center for Neighborhood and Brownfields Redevelopment of Rutgers University; Director of the National Center for Transportation Security Excellence, and Associate Dean of the Faculty of the Bloustein School of Planning and Public Policy, Rutgers University, New Jersey (USA). Dr. Greenberg studies urban environmental, health and neighborhood redevelopment policies.
Cozzolino, A., Flammini, F., Galli, V., Lamberti, M., Poggi, G., Pragliola, C.: Evaluating the effects of MJPEG compression on Motion Tracking in metro railway surveillance. In: Proc. 14th Intl. Conf. on Advanced Concepts for Intelligent Vision Systems, ACIVS 2012, Sept. 4-7 2012, Brno, Czech Republic, J. Blanc-Talon et al. (Eds.), Springer LNCS 7517, pp. 142–154 (Springer-Verlag Berlin Heidelberg, Germany, ISBN 978-3-642-33139-8)