
Opinion

Universal Basic Income: In Action

Manak Suri


Universal Basic Income presents a case to be considered, now stronger than ever, as automation hits us in recurring waves and machines slowly begin to take over much of the work once done by human beings. If you aren't caught up on the idea behind UBI and what its implementation could mean for us, you might want to read our previous article on the subject. Now, let's focus on the practicality of the scheme, look at some of the existing cases of UBI in different parts of the world along with their results so far, and consider who is backing the program to become a political norm in the coming decades.

Basic Income Around the World

Pilot programs of UBI have been run in different parts of the world. Besides Finland and Ontario, Canada, which began experimenting with UBI this year, places such as Brazil, the city of Utrecht in the Netherlands and even Oakland, California in the United States have implemented varying UBI schemes for parts of their populations. Hawaii passed legislation this year aimed at forming working groups to study the effects of UBI. Elsewhere in the United States, the state of Alaska boasts a genuine UBI program that has existed since 1976: the Alaska Permanent Fund (APF), which is financed by oil revenues and pays a dividend to permanent residents every year. In 2016, however, 77% of Swiss voters rejected a proposal to introduce UBI in their country. Now let's take a look at the cases of Finland and Ontario individually.

In Ontario, Canada, trials of UBI have recently begun, involving 4,000 citizens between the ages of 18 and 64 who can each expect to receive a little over 12,000 dollars a year; couples are entitled to nearly twice that amount. The plan is to study the effect of such a scheme on the health and well-being of the participants, their earnings and their productivity. While the experiment is still in its early stages and it is too early to draw conclusions, a few participants have already shown signs of positive change. In addition to boosting the income of some at the lower end of the earnings bracket, the scheme has improved their mental well-being and enabled them to afford healthier food, pay more attention to their health and visit their families more often. Other participants have said that the safety net the scheme provides has allowed them to focus on the work they want to do and on helping others.

Unsurprisingly, automation was a major factor in the decision to introduce the scheme in the province. Ontario's Premier Kathleen Wynne said of it: "I see it on a daily basis. I go into a factory and the floor plant manager can tell me where there were 20 people and there is one machine. We need to understand what it might look like if there is, in fact, the labour disruption that some economists are predicting." Ontario has a population of nearly 14 million people, and positive overall results could see the scheme adopted throughout the province, covering all of its populace.

In January this year, 2,000 people were randomly selected across Finland to take part in a trial of one of the most advanced UBI schemes in the world, undertaken by the Finnish Social Insurance Institution. Under the scheme, the selected citizens receive €560 (£495) a month from the Finnish government regardless of their employment status or how much they earn. Cases in the country so far suggest that UBI gives participants more flexibility in their working patterns, encourages the entrepreneurial spirit to take up what one would really like to do, and frees up time and focus for volunteering and charity work.

Ms. Sini Martinnen, one of the beneficiaries of the program, noted the above changes in her own lifestyle. "So there's value in other things you do – if there's just not enough work for everyone you have to figure out how to inspire people to be creative and do other kinds of stuff," she said, a statement that fits the kind of environment automation is creating. "Otherwise, you will have a lot of different social problems that will be very expensive – more expensive than the basic income system," she added.

What Studies Say

The success of UBI among small populations in Ontario or in a Nordic country such as Finland does not necessarily mean it will work in larger countries. UBI may be introduced in a limited form in parts of the UK, where we may be able to better assess the case for basic income and its viability over a population that is more diverse and layered with complexity at every step of implementation.

Nevertheless, studies on the matter precede trials in many countries. A recent study by the Roosevelt Institute on universal basic income and its effects suggests that the introduction of UBI in the United States would boost the US economy by $2.5 trillion. The study, titled "Modeling the Macroeconomic Effects of a Universal Basic Income", suggests that a UBI of $1,000 a month to every American adult would grow the economy by 12.56 percent over a period of eight years, increasing the country's GDP by nearly $2.5 trillion. It reaches this conclusion after modeling three different versions of unconditional cash payments. Another assumption behind the result is that room for UBI in the budget would be made by increasing the deficit, not by raising taxes. "When paying for the policy by increasing taxes on households rather than paying for the policy with debt, the policy is not expansionary," the study says. "In effect, it is giving to households with one hand what it is taking away with the other. There is no net effect."
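As a rough back-of-envelope check of those headline figures – assuming, purely for illustration and not from the study itself, roughly 250 million American adults and a US GDP of about $19.5 trillion – the orders of magnitude line up:

$$ \$1{,}000 \times 12 \times 250\ \text{million} \approx \$3\ \text{trillion per year in gross outlays} $$
$$ 0.1256 \times \$19.5\ \text{trillion} \approx \$2.4\ \text{trillion in added GDP} $$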

In the midst of these studies, theories and trials, UBI has attracted support and criticism from all corners. In the final article of this series, we'll take a look at who is backing the program to become a political norm in the coming decades, and at the pros and cons tossed around in the debate over universal basic income.


A student of economics with a keen eye for developments in the geopolitical sphere, Manak is a curious individual with a penchant for writing about anything that makes him ponder long enough.


Opinion

The History Question: Is It Better to Remember or to Forget?


Years ago, the philosopher George Santayana coined a phrase that fuels many debates to this day. His original saying is "those who cannot remember the past are condemned to repeat it", although many sources now present it as a variation along the lines of "those who cannot learn from history are doomed to repeat it". The latter has rather more substance to it in light of the ongoing debate about how much history we should be learning, and how.

Is It Better to Remember or Forget About the Past?

On one hand, Santayana was right. Learning about the past is essential if people are to progress. One also shouldn't overlook the importance of remembrance and of paying respects to the dead, both those who pushed progress forward and those who fell victim to major tragedies that could and should have been averted.

The main argument in favor of learning about the past is that such knowledge is necessary to prevent the same things from happening in the future. With it, one can see the signs and stop a tragedy before it gains momentum.

That is sound in theory, but reality is always different. Today, for example, people are clearly forgetting, and the much-criticized education system is only partly at fault. Even the greatest of tragedies have not been spared this fate. Surveys suggest that roughly two-thirds of millennials today do not know basic facts about the Holocaust, such as what Auschwitz was, and awareness is likely even lower among the generations that follow them. In school history courses, one of the greatest disasters in history is barely touched upon, if at all. Outside the history classroom, one can only catch small but terrifying glimpses of it at the Holocaust Museum and other museums that rarely attract many visitors. And now we are witnessing a rise in antisemitic crime.

Are these two facts related? Does the lack of awareness of the horrors committed in the name of Aryan supremacy contribute to the renewed popularity of right-wing extremists?

It does, but by how much? That is the question that no one can truly answer.

And what about other genocides? The Holocaust had the highest death toll, but it was far from the only genocide in history. Quite a few of them happened after World War II, before the memory of the atrocities against the Jews had begun to fade. This means that while forgetting history is a factor, it is not the deciding factor in its repetition.

But what, then, is responsible for the re-enactment of past mistakes and tragedies?

Learning. This is the important thing that is most often overlooked when citing Santayana’s famous saying. It’s not enough to learn about the past and know the facts of things that happened. It’s important to learn from those facts and put in place protections that will prevent them from happening again. And this is something that humanity, as a whole, has yet to succeed in doing.

Dwelling in the Past Can Be Just As Bad

One also shouldn't forget that there is such a thing as "too much history". The Bosnian War and the genocide that took place there in the 1990s are a vivid example of how the past can be exploited by political powers. Used as part of the propaganda that fueled the war, history can become a weapon in the hands of those who want to use it for their own ends.

And this is what humans have been doing since the dawn of time. There is always someone who will use any means necessary to achieve whatever it is they wish. This results in wars and genocides, and hundreds of smaller but no less devastating tragedies.

Therefore, the problem isn't whether people should be learning history, but human nature itself. Perhaps teaching that lesson can help fix this fundamental flaw and truly stop the worst of the past from repeating itself.



Opinion

Is there such a thing as cyberwar?

Alexandra Goman


Two decades have passed since Arquilla and Ronfeldt warned the public in 1993 that cyberwar was coming. They were also the first to introduce the concept of cyberwar and to give an elaborated opinion on it. They referred to the conduct and preparation of military operations according to information-related principles, and invoked a link between intelligence (the collection of information for political or military purposes) and cyber operations. Since then, the scale of intelligence has expanded significantly.

Interestingly, before cyber appeared there was radio, which was used for intelligence purposes and was later weaponized in World War II. From that time on, electronic warfare became a standard characteristic of modern conflict. Still, there is a key difference between electronic warfare and cyber warfare. Traditional electronic warfare aimed to guide, target, or protect weapons systems (Ibid., p. 24). Cyber, in contrast, makes today's weapons and military systems smarter but also more vulnerable to attack.

At the moment, everyone still wonders what the whole idea of cyberwar means. There is no accepted interpretation or definition, and many experts even say that such a war does not exist (or cannot be captured by the notion of "war"). Perhaps this is because a war in cyberspace has not yet happened: cyber capabilities have not actually killed anyone, and code has not been used as a form of force.

Similarly, the dangers of the nuclear bomb were recognized only after its use, and the same goes for the notion of "nuclear war". Although there have been many cyberattacks, none of them has risen to the level of war, because none has caused damage comparable to that of a large-scale conflict.

Cyber warfare has derived from different aspects of conventional warfare and traditional definitions of war. It usually involves organized units within a nation-state conducting offensive or defensive operations as part of a war or a conflict.

In general, since the study of cyber is relatively new, there are many competing terms and definitions to explain the cyber phenomenon. The concepts of the revolution in military affairs, electronic warfare, information warfare, and cyber war have all been offered to describe the newly emerging area of conflict. Experts do not agree on any particular term and often use different notions when talking about cyber issues. Nonetheless, it is vital to understand the facts of the 21st century, much as it became necessary with the invention of the atomic bomb. A major concern now is no longer weapons of mass destruction, but weapons of mass disruption (Cetron and Davies, 2009, p. 47).

One of the central elements in defining a cyberwar is that it has to meet the same criteria applied to any other type of war. Vandalism and spying are crimes, but they do not start wars. So, presumably, there has to be physical destruction and there have to be casualties in order to declare a war.

A cyberwar should therefore cause real-world damage similar to that of a conventional war, while being fought in the digital world. What is not clear, however, is whether it should be fought exclusively in cyberspace or whether it can accompany a conventional attack. This aspect is quite interesting, because cyberattacks can easily be used in combination with a kinetic attack and can multiply the force and power of the attacker.

In that case, it makes little sense to create a new term, "cyberwar", since it falls under the same definition of war. It is much like aerial bombing supporting ground attacks during World War I: in the end we called it a war, not a particular type of war. Consequently, the introduction of cyber looks more like a revolution in military affairs than a new, emerging type of warfare.

What is clear, though, is that the difference in definitions complicates the regulation of cyberspace and prevents states from reaching common ground on cyber issues or developing new treaties and agreements. So far there is no international agreement on cyber principles, despite some attempts by states to engage in negotiations (the Budapest Conference on Cyberspace, the World Conference on International Telecommunications). There is, however, the Convention on Cybercrime, the first international agreement addressing computer crime, adopted by the Council of Europe. Interestingly enough, Russia (a member of the Council) has neither signed nor ratified the agreement, whereas the US (not a member of the Council) has signed and ratified it.

Apart from these difficulties in defining cyberwar, there has been hyperbolic use of the word itself, mostly by the media and tabloids (e.g. The Washington Post, "We are at cyberwar and we are our own enemy"; The New York Times, "How to prevent Cyberwar"; ZDNet, "Cyberwar: a guide to the frightening future of online conflict"; Komsomolskaya Pravda, "Are we expecting the First World Cyberwar?"). They do not usually give any concrete information but are eager to use the term, applying it randomly to different cases simply because it sounds good. All in all, uninformed public use of the word has contributed enormously to the heat surrounding cyber issues.

Further, cyberattacks are too often discussed as if they were equivalent, regardless of their impact. In this way, minor cases such as ransomware or phishing can be raised to the level of an armed attack (especially if they affect multiple computers worldwide). Yet these are good examples of cybercrime, and crime is not war. When individuals engage in this type of activity, they are not waging war. The same goes for espionage in cyberspace: catching a spy on one's territory will certainly put pressure on bilateral relations, but it will not start a war.

This exaggeration of cyberattacks can be explained through securitization theory. The notion, offered by the Copenhagen School of security studies, describes how a certain issue can be politicized and securitized to the extent that it comes to be treated as a threat to national security (see Buzan, 2006).

To conclude, there is no guidance for the conduct of "cyberwar". There are no internationally agreed definitions and, to that extent, the whole idea of cyberwar so far seems unrealistic. At present, technology is not sophisticated enough to sustain military operations conducted entirely in cyberspace. Besides, any cyberattack of such a scale would presumably result in physical destruction, which might in turn provoke conventional retaliation. The result would be the kind of war we have known for years, so there is no need to introduce a particular new type of war. On another note, using cyber operations to support a conventional war or conflict is the likely way forward, but in that case it amounts to a revolution and modernization in military affairs rather than a new kind of war.

I would be interested to hear your opinion about that in the comments below.

For further information see:

1) War Games (1983), film.

2) Arquilla, J. and Ronfeldt, D. (1993). Cyberwar is Coming! RAND Corporation. Available at: https://www.rand.org/pubs/reprints/RP223.html

3) Cetron, M. J. and Davies, O. (2009). Ten Critical Trends for Cyber Security. The Futurist, 43(5), pp. 40–49.

4) Stiennon, R. (2015). There Will Be Cyberwar: How the Move to Network-Centric War Fighting Has Set the Stage for Cyberwar. Michigan: IT-Harvest Press.



Opinion

On the issue of cyber security of critical infrastructures

Alexandra Goman


There is a lot of talk about cyberattacks nowadays. Regular users worry about their data and try to secure it by all means necessary. Yet no one really asks whether power plants or nuclear facilities are well secured; everyone simply assumes that they are.

The reality, however, differs. According to many reports by cyber security companies, there is an increased risk of cyberattacks targeting SCADA and ICS. Supervisory Control and Data Acquisition (SCADA) systems control physical equipment – power plants, oil and gas pipelines – and can also control or monitor processes such as heating or energy consumption. Along with Industrial Control Systems (ICS), they run critical elements of industrial automation processes. Exploiting vulnerabilities in critical infrastructure can lead to consequences of unimaginable scale. (These are the types of attacks used in cyberwar scenarios and hypothetical military settings.)

[Figure omitted. Source: Fortinet, 2015]

There are many reasons why these systems are vulnerable to attack. First of all, the main problem is that they are of old design: they were built before they were connected to any networks. They were later configured to connect via Ethernet, and that is when they became part of a larger infrastructure. The more advanced a SCADA system becomes, the more vulnerabilities there are to exploit, so updates need to be regular and on time. Secondly, there is a lack of monitoring. Newly connected devices allow remote monitoring, but not all devices have the same reporting capabilities. There are also authentication issues (weak passwords, weak authentication processes), even though authentication is precisely what is supposed to restrict unauthorized access (see Common SCADA Threats and Vulnerabilities, Patriot Technologies, Inc., online).

In these scenarios, there is no way to be certain what will backfire, because of the complexity of communications and power networks. This is also called the cascading effect of attacks: not knowing who is connected to whom may cause major disruptions. The US East Coast power blackout of 2003 proves this point – a failure in one element of the grid spread across other electrical networks. By the same token, it is also difficult for an attacker to predict the consequences of an attack, should one be executed. Such an attack could easily escalate into a more serious conflict, so it might not be the best option for states to employ such methods.
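To make the cascading effect concrete, here is a minimal, purely illustrative sketch in Python. The toy network and its capacities are invented for this example; it is not a model of any real grid or of the 2003 blackout. When one element fails, its load shifts to its neighbours, and any neighbour pushed past capacity fails in turn.

def cascade(capacity, load, neighbours, first_failure):
    # Elements that have tripped, and those whose load still has to be redistributed.
    failed, frontier = {first_failure}, [first_failure]
    while frontier:
        line = frontier.pop()
        share = load[line] / max(1, len(neighbours[line]))  # split the lost load evenly
        for other in neighbours[line]:
            if other in failed:
                continue
            load[other] += share              # neighbour absorbs part of the lost load
            if load[other] > capacity[other]:
                failed.add(other)             # overloaded neighbour trips as well
                frontier.append(other)
    return failed

# Hypothetical four-node grid in which B and C already run close to their limits.
capacity = {"A": 100, "B": 60, "C": 60, "D": 120}
load = {"A": 80, "B": 50, "C": 55, "D": 70}
neighbours = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

print(cascade(capacity, load, neighbours, "A"))  # a single failure at A takes down the whole toy grid

The point of the sketch is simply that the outcome depends on the whole topology, not on the element attacked, which is exactly why neither defenders nor attackers can predict the consequences with confidence.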

Moreover, there is a risk of damaging critical infrastructure unintentionally – that is, if a virus or worm that was not intended to target SCADA happens to spread there as well. The uncontrollability of such code may seriously dampen the desire to use it, especially for nation-states. For instance, in 2003 a worm penetrated a private network of the US Davis-Besse nuclear power station and disabled a safety monitoring system for five hours. In 2009, French fighter jets could not take off because their systems were infected with a virus.

Indeed, a scenario in which an attacker gains access to a SCADA system and manipulates it, causing large-scale disruption, may be hypothetical, but that does not make it any less possible in the future. So far, however, the only known case that affected an industrial control system is Stuxnet. It did not kill anyone, yet it drew experts' attention to the plausibility of more sophisticated attacks in the future. Such attacks might cause a level of destruction comparable to that of a conventional attack, and therefore result in war.

Further reading:

Bradbury, D. (2012). SCADA: A Critical Vulnerability. Computer Fraud & Security, 4, pp. 11–14.


