
Technology

Concerns and Limitations of Cyber Warfare

Alexandra Goman


The discovery of Stuxnet, a malware that targeted a nuclear facility, was revolutionary and groundbreaking. It targeted the industrial control systems (ICS) that monitor and run industrial facilities. Before that, most malicious programs were developed to steal information or break into financial systems to extort money. Stuxnet went beyond that and targeted a high-level facility. It is not hard to imagine what damage it could have inflicted had the worm not been detected. What is more worrisome, the technology is out. It might not be perfect, but it is definitely a start. Regardless of the intentions behind Stuxnet, a cyber bomb has exploded, and everyone now knows that cyber capabilities can indeed be developed and mastered.

Therefore, if such capabilities can be developed, they probably will be. The final goal of Stuxnet was to affect the physical equipment run by a specific ICS, manipulating the computer program so that it acted as the attacker intended. Such a cyberattack had a particular motivation: sabotage of industrial equipment, with destruction possibly among the goals. If those were indeed the goals, it may have been an offensive act conducted by an interested party, presumably a state pursuing a political objective. Yet there are certain limitations when it comes to so-called “cyber weapons” (malware that might be employed for military use or intelligence gathering).

One of the main concerns about cyber offence is that code may spread uncontrollably to other systems. In terms of physical weapons, it is like a ballistic missile that can go off course at any time, damaging unintended targets and/or killing civilians. Cyber offensive technology lacks the precision that is so valued in the military. In ICS and SCADA environments, for example, the systems are so complex that one may never know what can backfire. The lack of precision consequently affects military decisions: when launching a weapon, officers should know its precise capabilities; otherwise, it is too risky and not worth it.

In the case of Stuxnet, the program started replicating itself and infected computers in many countries. To this day we do not know whether that was planned; given that the target was the Natanz facility, it seems unlikely. Symantec Corporation began analyzing the case only with external help; the samples did not come from Natanz. This complicates matters further if a country decides to launch an offensive cyberattack.

If military planning cannot prevent cyber technology from going awry or leaking into the public, it brings more disadvantages than advantages. Moreover, given the possibility of the code being discovered and broken down into pieces to understand what it does, it may ultimately benefit an opposing party (and any other interested party along the way). This is unacceptable in military affairs.

Similarly, once the code is launched and reaches its target, it can be discovered by the opponent. Compare this with nuclear weapons: when a bomb explodes, it brings damage and destruction, but its technology remains secret. With cyber this may not be the case; once a malware/virus is discovered, it can be reverse engineered and the vulnerability patched. By studying the code, an enemy can learn the technology and tactics used, which could be unfavourable for the attacker in the long run.

Additionally, not every malware is meant to spread by itself. To control the spread, the vulnerability can be patched, meaning the software that contained it is updated. Anti-malware can also be introduced, making the computer system immune to that particular exploit. Nonetheless, if the malware spreads uncontrollably, there is not much an attacker can do: it is not possible to stop the attack. In this scenario, an attacker may only release information about the vulnerability so that someone else can fix it. A state, however, is highly unlikely to do so, especially if the damage is extensive. It would not only bring diplomatic consequences but might also severely damage the state's reputation.
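The difference patching makes to a self-spreading worm can be illustrated with a toy simulation (a hypothetical model with made-up numbers, not real malware behaviour): every infected host probes random machines each round, while the defender patches some of the still-vulnerable ones.

```python
import random

def simulate(patch_per_round, n_hosts=1000, rounds=15, spread_rate=2, seed=42):
    """Toy worm model: each round, every infected host probes `spread_rate`
    random machines; probes succeed only on still-vulnerable hosts. The
    defender patches `patch_per_round` vulnerable hosts per round."""
    random.seed(seed)
    vulnerable = set(range(1, n_hosts))  # host 0 is patient zero
    infected = {0}
    for _ in range(rounds):
        # patching shrinks the pool the worm can still compromise
        patched = random.sample(sorted(vulnerable),
                                min(patch_per_round, len(vulnerable)))
        vulnerable.difference_update(patched)
        # every infected host probes random targets
        hits = set()
        for _ in range(len(infected) * spread_rate):
            target = random.randrange(n_hosts)
            if target in vulnerable:
                hits.add(target)
        vulnerable -= hits
        infected |= hits
    return len(infected)

print("no patching:     ", simulate(patch_per_round=0), "hosts infected")
print("80 patches/round:", simulate(patch_per_round=80), "hosts infected")
```

With no patching the worm saturates nearly the whole population; aggressive patching leaves it far fewer hosts to take, which is the point made above: once the worm is loose, only the defenders' patching, not the attacker, limits the spread.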

An AI-enabled cyberattack could perhaps fulfill this potential. AI systems could make malicious programs more precise, keeping the spread under control. On the other hand, AI could also lead to greater collateral damage if a system decides on its own to target other facilities, possibly resulting in human deaths. Similar concerns are raised about autonomous weapon systems, where it is argued that decision-making must be left to humans rather than to technology. AI has the potential to make existing cyberattacks both more effective and more efficient (Schaerf, 2018).

The aforementioned concern leads to another, which affects the end result. When a certain weapon is employed, it is expected to achieve a certain goal, e.g. to destroy a building. With cyber capabilities there is no such certainty. In the case of Stuxnet, the malware clearly failed to achieve its end goal, which was to disrupt the activities of the industrial facility.

The true costs of a cyberattack may also be uncertain and hard to calculate. If so, an attacker faces a high level of uncertainty, which may itself deter a malicious act (particularly where nation states are involved). However, costs and benefits can always be miscalculated, and an attacker hoping for a better gain may lose much more in the end (consider Pearl Harbour).
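This cost–benefit reasoning can be sketched as a simple expected-value calculation. The probabilities and payoffs below are invented purely for illustration; the point is how sensitive the result is to a misjudged probability:

```python
def expected_payoff(p_success, gain, p_attribution, penalty):
    """Expected value of an attack: gain weighted by the odds of success,
    minus penalty weighted by the odds of being identified and punished."""
    return p_success * gain - p_attribution * penalty

# attacker's own (optimistic) estimate: looks profitable
print(expected_payoff(p_success=0.8, gain=100, p_attribution=0.1, penalty=300))

# same attack if attribution is three times as likely as assumed: a net loss
print(expected_payoff(p_success=0.8, gain=100, p_attribution=0.3, penalty=300))
```

The same planned attack flips from profitable to loss-making with one changed assumption, which is exactly the miscalculation risk described above.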

Another concern is the code becoming available to the public. Once it does, it can be copied, re-used and/or improved. Similar concerns about proliferation and further collateral damage emerged when the Stuxnet code became available online. An attacker may launch a cyberattack, and if it is discovered, another hacker can reverse engineer the code and turn it against a different target. The code can also be copied, improved and specialized to meet the needs of another party. Technology is becoming more complex, and by dissecting malware developed by others, it takes less time to produce a similar program and/or develop something stronger. (For instance, after Stuxnet, more advanced malware was discovered: Duqu and Flame.)

Furthermore, there are other difficulties with employing cyber offensive technology. To maximize its effect, it should be supported by intelligence. In the case of Stuxnet, the offender needed to pinpoint the location of the facility and the equipment potentially involved, and to find zero-day vulnerabilities, which are extremely rare and hard to come by[1]. Much of cyber vulnerability comes down to data integrity: the data on which an industrial infrastructure runs must be reliable and accurate, and securing it is essential.
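The data-integrity point can be made concrete: controllers act on sensor readings, so a defender wants tampered readings to be detectable. A minimal sketch using a keyed hash (HMAC), a generic integrity technique and not anything any real facility is claimed to use; the key and reading format are invented for the example:

```python
import hashlib
import hmac

KEY = b"shared-secret-key"  # hypothetical pre-shared key

def sign(reading: str) -> str:
    """Attach an authentication tag to a sensor reading."""
    return hmac.new(KEY, reading.encode(), hashlib.sha256).hexdigest()

def verify(reading: str, tag: str) -> bool:
    """Accept only readings whose tag matches; reject forged or altered data."""
    return hmac.compare_digest(sign(reading), tag)

tag = sign("rotor_speed=1064")
print(verify("rotor_speed=1064", tag))   # authentic reading passes
print(verify("rotor_speed=800", tag))    # tampered value is rejected
```

Without the key, an attacker who alters a reading cannot produce a valid tag, so integrity violations become visible rather than silent.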

After pinpointing a vulnerability, specialists need to write specific code capable of bridging an air-gapped system. In the case of Stuxnet, all of the above required a certain level of intelligence support and financial capability. The complexity of the tasks involved in its development is exactly why Stuxnet was thought to be sponsored and/or initiated by a nation state. If intelligence is lacking, the attack may not bring the desired effect. Moreover, if cyber offence is to be used in retaliation, malicious programs should be kept ready to use (on “high alert”) in the event of necessity.

Despite some advantages of cyber offence (such as low costs and anonymity), this technology appears unlikely to be used by the military on its own. The high level of uncertainty stops armies from relying on it for offence: when you have other highly precise weapons, it makes little sense to settle for an unreliable technology that may or may not bring the wanted result. Yet other types of cyberattacks, such as DDoS attacks, can give clear advantages during military operations and hand an attacker some good cards in a conflict. When such attacks are used together with military ground operations, they are much more likely to bring the desired result.


[1] For perspective: out of the twelve million pieces of malware that computer security companies find each year, fewer than a dozen use a zero-day exploit.
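Put as a proportion, the footnote's figures (taking "a dozen" as the upper bound) work out to:

```python
samples_per_year = 12_000_000  # malware samples found annually, per the footnote
zero_day_count = 12            # "less than a dozen" taken as an upper bound

share = zero_day_count / samples_per_year
print(f"at most {share:.4%} of samples carry a zero-day exploit")
```

That is at most one sample in a million, which is why zero-day-based attacks are taken as a marker of well-resourced attackers.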


Specialist in global security and nuclear disarmament. Excited about international relations, curious about cognitive, psycho- & neuro-linguistics. A complete traveller.


Blog

For Enea Angelo Trevisan and Ealixir, better than solving the problem of cyberbullying is preventing it


One of the most talked-about pieces of Instagram news in recent weeks was its deliberation over whether to hide the like counter on the platform, discussed mostly from the perspective of marketing strategy. It turns out that the social media platform is actually considering this new feature as a means of tackling a much bigger problem: cyberbullying.

A recent study carried out by the Pew Research Center showed that fifty-nine percent of teens reported having experienced at least one of six types of abusive online behavior, cyberbullying included. Another concerning fact from the study is that 16% of these teens had already been subject to a physical threat of some kind because of incidents on social media.

In addition, a report published in the Journal of Abnormal Psychology highlighted the popularity of smartphones among teenagers, a statistic that has only grown over the past seven years. “More U.S. adolescents and young adults in the late 2010s, versus the mid-2000s, experienced serious psychological distress, major depression or suicidal thoughts, and more attempted suicide,” stresses the study’s lead author, Jean Twenge, who also wrote the book iGen, in which she ponders the influence of smartphones on teenage and child mental health.

Besides hiding how many likes a photo has received, Instagram is also considering another feature: a “nudge” alert that activates while the user is still writing a comment flagged as potentially aggressive. According to the head of Instagram, Adam Mosseri, this could give people an extra incentive to think twice before committing to an attack.
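Instagram has not published how the nudge decides what counts as aggressive; as a crude illustration of the idea, a check over the draft comment before posting might look like this (the word list and matching rule are entirely made up; a real system would use a trained model):

```python
# hypothetical word list; real moderation relies on trained classifiers
FLAGGED = {"stupid", "ugly", "loser", "hate"}

def should_nudge(draft: str) -> bool:
    """Return True if the draft comment contains flagged vocabulary,
    prompting the user to reconsider before posting."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    return not FLAGGED.isdisjoint(words)

print(should_nudge("You are such a loser!"))   # user gets nudged
print(should_nudge("Great photo, congrats!"))  # posts normally
```

The key design point is that the check runs on the draft, before publication, so the intervention is preventive rather than a clean-up after the fact.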

“Of all the obnoxious activities that can be carried out on the web, cyberbully is in my opinion the worst,” says Ealixir’s CEO and founder Enea Angelo Trevisan. “Cyberbullying targets those who cannot defend themselves: often minors or minorities. This is why one of our priorities as a company is to invest our technology in the fight against this plague.” In that sense, Ealixir supports individuals through early detection of offensive and troublesome content, so it can be immediately erased and monitored to prevent re-uploading.

For Trevisan, the fight against cyberbullying starts in schools, which is why Ealixir also organizes sessions with children to warn them about the dangers of the internet. “At this young age, kids think of internet as a huge playground. We teach them not to trust strangers and to think about the consequences of their virtual actions, exactly like in real life,” he explains.

Moreover, families also need to be aware of their children’s presence on the internet; they should not underestimate the risks of giving a smartphone to a child or a teen. “This is due to the fact that older generations were born and raised without the web, so they struggle to identify with their children. With Ealixir, we try and fill in this gap most of all through prevention, but also actively by deleting offensive contents and/or preventing harassment.”

Besides monitoring and removing offensive content published online, Ealixir also supports families and individuals who find themselves victims of cyberbullying by putting them in contact with specialized lawyers who can handle a case in court. However, just as with health, prevention is the best scenario when it comes to cyberbullying, so internet literacy becomes an important competence for children to learn for a healthier future of the web.

Sources: https://www.theladders.com/career-advice/how-instagram-plans-to-take-a-stand-against-cyberbullying


Technology

Nanofiber market growth expected to continue throughout 2019 and into 2020


The field is now seeing phenomenal growth and investment as newer, slicker and cheaper technologies, such as electrospinning, are allowing for more and more applications, particularly in the field of drug delivery. 

Nanofiber is no new technology; microfibers have been in use, particularly in the textile industry, for many years. Even in the global filtration and separation technology market, current forecasts for the coming year are for growth of around 6% in demand, and that is before you factor in the explosion in alternative global drug delivery methods driven by the increase in chronic diseases, new pharmaceutical products and technological advances. Major manufacturers are exploring the production of nanomaterials by electrospinning as the next big step forward for their business.
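A 6% annual growth rate compounds noticeably over a few years. A quick sketch, indexing today's market at 100 (index values only, not actual market figures):

```python
def project(base, annual_rate, years):
    """Compound a base value forward at a fixed annual growth rate."""
    return base * (1 + annual_rate) ** years

# index today's market at 100 and apply the forecast 6% a year
for years in (1, 5, 10):
    print(f"after {years:2d} year(s): {project(100, 0.06, years):.1f}")
```

At that rate the market index passes 133 within five years and nearly 180 within ten, which is why equipment makers treat the forecast as a long-run signal rather than a one-year bump.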

What is electrospinning and how does it work? 

Put simply, electrospinning is a method by which nanomaterials are made. It is incredibly versatile: the range of raw materials that can be used is very wide, and the choice of material allows for different properties in the finished product.

Starting with a polymer solution, using materials such as collagen, cellulose, silk fibroin, keratin, gelatin or polysaccharides, chain entanglement takes place in the solution. An electrical force is then applied to pull threads of electrically charged material into a jet that can be whipped or spun into fibers as the solvents in the solution evaporate.

Finally, the dry fiber is formed into a membrane or material, depending on the intended use. The material has some great functional properties, such as a large surface-area-to-volume ratio, high porosity and strength.
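That large surface-area-to-volume ratio follows directly from the fiber radius: for a long cylinder the ratio is 2/r, so a hundredfold thinner fiber has a hundredfold larger ratio. A quick check with illustrative radii (the specific values are examples, not measured fibers):

```python
def surface_to_volume(radius_m):
    """Side-surface-to-volume ratio of a long cylinder:
    (2 * pi * r * L) / (pi * r**2 * L) = 2 / r, independent of length."""
    return 2 / radius_m

micro = surface_to_volume(5e-6)   # a 5-micrometre textile microfiber
nano = surface_to_volume(50e-9)   # a 50-nanometre electrospun nanofiber
print(f"the nanofiber's ratio is {nano / micro:.0f}x the microfiber's")
```

That geometric scaling is what makes nanofiber membranes attractive for filtration and drug release, where available surface per unit of material is what matters.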

Nanomaterials are revolutionising the development of new materials, and for companies looking to lead new developments and push industry forward with new technologies, this is an area that will help them stay at the top of their game.

Why is it worth the research and development?

With virtually limitless applications, electrospinning can be used in any industry. Not just in the production of textiles, where breathable, lightweight or protective clothing might be required, but also in the creation of filtration systems, and in medicinal and pharmaceutical products. 

It even has uses in the packaging of food and other consumables, and there is some research going into the creation of food itself. There are already companies that have managed to scale their electrospinning processes.

The versatility of the process and the potential for creating groundbreaking new products is only part of the story. Another reason this is a good direction for a research and development team is that it is relatively quick and easy to set up with the help of a good electrospinning equipment company. There is a range of machinery available, from small worktop ‘proof of concept’ electrospinning machines for small laboratories to large pre-production scale machines. This means start-up and installation costs are far lower than for many other production processes.

The user interface of this machinery has also advanced with the times, making it far simpler to operate and carry out the processes with only a passing knowledge of polymers and electrostatics, so training up the workforce takes little time. The world is already seeing the benefits of this technology, particularly in health and medicine: wound patches and organ membranes, for example, are made artificially and used during surgical procedures, and the molecular structure of the material lets it graft onto living biological tissue. Pharmaceutical implants and patches for the slow release of medicine are another use. This is a field that will continue to grow as new discoveries are made.


Technology

9 disruptive technologies that will bloom before 2019 ends


Since the beginning of time, each new technological invention has meant a change of paradigm for the way people work. However, in recent years the frequency of changes has accelerated to such an extent that companies have to renew themselves and their daily procedures almost every season. Usually they are small changes or mere adaptations, but sometimes an innovation appears that makes the previous mechanisms obsolete. This is what is known as disruptive technology.

2019 is a disruptive year as far as technology is concerned: the trend of innovation continues at an accelerated pace, deepening the technological revolution. Innovative industries keep evolving, overcoming barriers once imaginable only in Isaac Asimov’s sci-fi novels or in TV series and films such as Black Mirror or Gattaca. Here are the technological trends making a disruptive change in the digital transformation.

1. 5G mobile networks

Some companies have started to launch pilot experiments with this kind of technology. 5G prepares the ground for browsing at speeds of up to 10 gigabits per second from mobile devices.
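Headline mobile rates like this are usually quoted in bits, and a back-of-envelope calculation shows what 10 gigabits per second means in practice (a hypothetical file size, ignoring protocol overhead and real-world signal conditions):

```python
def transfer_seconds(size_gigabytes, rate_gigabits_per_s):
    """Time to move a payload: convert gigabytes to gigabits (x8),
    then divide by the link rate. Ignores protocol overhead entirely."""
    return size_gigabytes * 8 / rate_gigabits_per_s

# a 5 GB high-definition film over a 10 Gbit/s peak 5G link
print(f"{transfer_seconds(5, 10):.1f} seconds")
```

The byte-to-bit factor of eight is the usual trap in these estimates: a "10 gigabit" link moves 1.25 gigabytes per second, not 10.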

2. Artificial intelligence (AI)

This will be the year of AI's definitive take-off. The technology is also on political agendas: the European Commission has made it a mandate for member states to develop a strategy on the matter by the middle of the year.

3. Autonomous devices

Robots, drones and autonomous mobility systems are some of the innovations related to AI. They all aim to automate functions that were previously performed by people. This trend goes beyond mere automation through rigid programming models, as it explores AI to develop advanced behaviors that interact more naturally with the environment and users.

4. ‘Blockchain’

Finally, this technology is no longer associated only with the world of cryptocurrencies, and experts are starting to notice its likely application in other fields. In congresses such as the annual IoT World Congress by Digitalizing Industries, coming in October 2019, we will witness the actual implementation of many projects based on ‘blockchain’, which will try to solve the challenges the technology still faces in fields such as banking and insurance. It will also be a decisive year for the deployment of ‘decentralised organisations’ operating around smart contracts.

5. Advanced analytics

‘Big data’ is taking a step further with this trend, which combines the technology with artificial intelligence. Machine learning techniques will transform the way data analysis is developed, shared and consumed. It is estimated that advanced analytics capabilities will soon be widely adopted not only to work with information, but also embedded in the business applications of Human Resources, Finance, Sales, Marketing and Customer Service departments, in order to optimize decisions through deep analysis of data.

6. Digital twins

Digital twins are among the disruptive technologies that will have the most impact on the simulation and analysis of industrial processes. A digital twin is the virtual representation of a real-world entity or system, capable of maximizing the benefits of a company's digital transformation. Many companies and organizations are already implementing these representations and will develop them over time, improving their ability to collect and visualize the right data, apply improvements to it, and respond effectively to business objectives.

7. Enhanced Edge Computing

Edge computing is a trend mostly applied to the Internet of Things. It consists of placing intermediate points between connected objects so that information is processed, and other tasks performed, closer to where the user receives the content, reducing traffic and latency. This keeps processing near the endpoint rather than on a centralized cloud server. Rather than creating a new architecture, however, cloud and edge computing will evolve as complementary models: cloud services managed as a centralized offering that runs not only on centralized servers but also on local distributed servers and on the edge devices themselves.

8. Immersive experiences in intelligent spaces

Chatbots integrated into different conversation platforms and voice assistants are transforming the way people interact with the digital world, as are virtual reality (VR), augmented reality (AR) and mixed reality (MR). The combination of these technologies will profoundly change how we perceive everything around us, creating intelligent spaces where more immersive, interactive and automated experiences can be delivered for a specific group of people or for specific scenarios in an industry.

9. Digital ethics and privacy

Digital ethics and privacy are issues of growing interest to individuals, organizations and governments. It is no coincidence that people are increasingly concerned about how their personal information is used by public- and private-sector entities, so in the coming months companies will proactively address these concerns in order to gain users' trust.

