
Wars: From Weapons to Cyberattacks

Alexandra Goman


Historically, war was focused on public contests involving arms, as in Gentili’s concept of war. The main goal of such contests is to inflict damage on the soldiers of an opposing side. Through this lens, cyberwar may be seen as a contest which perhaps involves certain arms. It should be noted, however, that these contests are very seldom public, mostly due to the attribution problem. Moreover, cyberattacks do not kill or wound soldiers; instead, they aim to disrupt property. This is somewhat debatable, because such disruption of a system (like the meddling with the nuclear facilities of Iran) may affect both civilians and combatants in the longer run. Yet these secondary consequences are not the primary goal of a cyberattack, and thus a distinction should be drawn between cyberwar and war.

The public element of war is very important, as war is always openly declared. Additionally, the opposing side is given a chance to respond to the enemy by whatever means it deems necessary. In the context of cyberwar, this is more complicated. In the case of cyberattacks, it is very difficult to determine the source and the initial attacker (more precisely, the attribution problem, which is addressed further below). Moreover, many attackers prefer to remain silent. This argument is further exacerbated by the lack of evidence. To date, the best example of cyber warfare that has become somewhat public is Stuxnet, which has been neither attributed nor officially admitted.

In the end, the attack became public, but it remained hidden for a year before its discovery. Specialists did notice the Iranian centrifuges malfunctioning[1], but they failed to identify the source of the problem. This cyberattack was new because it did not hijack a computer or extort money; it was specifically designed to sabotage an industrial facility, the uranium enrichment plant at Natanz.

However, attribution still lags behind. The U.S. and Israel are believed to have launched Stuxnet, yet both have denied their involvement, and no other country has officially admitted responsibility. Based on the previous argument, for war to happen it has to be public. The case of Stuxnet and similar computer programs therefore does not prove the case for cyberwar.

Moreover, if war is seen as a repeated series of contests and battles, pursued for a common cause and reason (for example, to change the behavior of the adversary), then there should be more attacks than just one. Nothing seems to preclude a state from launching a series of cyberattacks against an enemy in the future, which could consequently be called a war. However, the adversary should be able to respond to the attacks.

Another view argues that the just war tradition[2] can accommodate cyberwar; however, there are also some questions to take into consideration. In cyberwar, a cyber tool is just a means used by the military or the government to achieve a certain goal. This fits the just war tradition very well, because the just war tradition does not say much about the means used in war. It is more focused on effects and intentions (see Stanford Encyclopedia of Philosophy Online).

The example of cyberweapons and the debate around them shows that they are discussed in the same way as any other evolving technology. If agents, effects, and intentions are identified, cyberwar should supposedly fit the just war tradition like any other type of war. However, cyber means have unique characteristics: ubiquity, the uncontrollability of cyberspace, and its growing importance in everyday life. These characteristics make cyberwar more dangerous and therefore increase the threat associated with it.

Another useful concept of war to which cyber is being applied is that of the Prussian general Carl von Clausewitz. It presents the trinity of war: violence, an instrumental role, and a political nature (Clausewitz, 1832). Any offensive action that is to be considered an act of war has to meet all three elements.

Firstly, any war is violent: the use of force compels the opponent to do the will of the attacker (Ibid., 1). It is lethal and has casualties. Secondly, an act of war has a goal which may be achieved by the end of the war (or not achieved, in case the attacker is defeated). The end of war, in this sense, happens when the opponent surrenders or cannot sustain any more damage. The third element is its political character. As Clausewitz puts it, “war is a mere continuation of politics by other means” (Ibid., p. 29). A state has a will that it wants to enforce on another state (or other states) through the use of force. When applying this model to cyber, there are some complications.

Cyber activities may be effective without violence and do not need to be instrumental to work. According to Rid, even if attackers have a political motivation, they are likely to be interested in avoiding attribution for some period of time. That is why, he highlights, cybercrime has been thriving and has been more successful than acts of war (Rid, 2012, p. 16). In all three of Clausewitz’s aspects, however, the use of force is essential.

In the case of war, damage is inflicted through the use of force. It may be a bomb dropped on a city, or a drone strike that destroys its target. In any case, the use of force is followed by casualties: buildings destroyed or people killed. In cyberspace, however, the situation is different. The actual use of force in cyberspace is a more complicated notion.

[1] International Atomic Energy Agency (2010). IAEA Statement on Iranian Enrichment Announcement. [online] Available at: https://www.iaea.org/newscenter/pressreleases/iaea-statement-iranian-enrichment-announcement [Accessed 28 Dec. 2017].

[2] Jus bellum iustum (Lat.) – sometimes referred to both as the “just war tradition” and “just war theory”. Just war theory explains the justifications for how and why wars are fought. The historical approach is concerned with the historical rules or agreements applied to different wars (e.g. the Hague Convention). The theory deals with military ethics and describes the forms that a war may take. The ethics is divided into two groups: jus ad bellum (the right to go to war) and jus in bello (right conduct in war) (see Stanford Encyclopedia of Philosophy Online). In the text, Cook applies cyberwar to the just war tradition rather than the theory. In his view, “tradition” describes something which evolves as the product of a culture (in Ohlin, Govern and Finkelstein, 2015, p. 16).


Specialist in global security and nuclear disarmament. Excited about international relations, curious about cognitive, psycho- & neuro-linguistics. A complete traveller.


Growth in the nanofiber market expected to continue throughout 2019 and into 2020


The field is now seeing phenomenal growth and investment as newer, slicker, and cheaper technologies, such as electrospinning, allow for more and more applications, particularly in the field of drug delivery.

The use of nanofiber is not a new technology; in fact, microfibers have been in use, particularly in the textile industry, for many years. Even in the global filtration and separation technology market, current forecasts for the coming year point to growth of around 6% in demand, and that is before you factor in the explosion in alternative global drug delivery methods driven by the increase in chronic diseases, new pharmaceutical products, and technological advances. Major manufacturers are exploring the production of nanomaterials by electrospinning as the next big step forward for their business.

What is electrospinning and how does it work? 

Put quite simply, electrospinning is the method by which nanomaterials are made. It is incredibly versatile: a very wide range of raw materials can be used, each allowing for different properties in the finished product.

Starting with a polymer solution, using materials such as collagen, cellulose, silk fibroin, keratin, gelatin, or polysaccharides, chain entanglement takes place in the solution. An electrical force is then applied to pull threads of electrically charged material into a jet that can be whipped or spun into fibers as the solvents in the solution evaporate.

Finally, the dry fiber is formed into a membrane or material, depending on the intended use. The material has some great functional properties, such as a large surface-area-to-volume ratio, high porosity, and strength.
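
To give a feel for why the surface-area-to-volume ratio grows so dramatically at the nanoscale, here is a minimal back-of-the-envelope sketch; the fiber diameters used are illustrative assumptions, not figures for any particular product. For a long cylindrical fiber the ratio works out to roughly 2/r, so shrinking the diameter by a factor of 100 boosts the ratio by the same factor.

```python
import math

def surface_to_volume_ratio(diameter_m: float, length_m: float = 1.0) -> float:
    """Surface-area-to-volume ratio of a cylindrical fiber (end caps ignored)."""
    radius = diameter_m / 2
    surface = 2 * math.pi * radius * length_m   # lateral surface area
    volume = math.pi * radius ** 2 * length_m   # cylinder volume
    return surface / volume                     # simplifies to 2 / radius

# Illustrative diameters: a 20 µm textile microfiber vs. a 200 nm electrospun nanofiber.
for label, diameter in [("microfiber (20 µm)", 20e-6), ("nanofiber (200 nm)", 200e-9)]:
    print(f"{label}: {surface_to_volume_ratio(diameter):.1e} m^-1")
```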

Nanomaterials are revolutionising the development of new materials, and for companies looking to lead new developments and push industry forward with new technologies, this is an area that will help them stay at the top of their game.

Why is it worth the research and development?

With virtually limitless applications, electrospinning can be used in almost any industry: not just in the production of textiles, where breathable, lightweight, or protective clothing might be required, but also in the creation of filtration systems and in medicinal and pharmaceutical products.

It even has uses in the packaging of food and other consumables, and some research is being put into the creation of food itself. There are already companies that have managed to scale up their electrospinning processes.

The versatility of the process and the potential for creating groundbreaking new products is only part of the story. Another reason this is a good direction for your research and development team is that it is relatively quick and easy to set up with the help of a good electrospinning equipment company. There is a range of machinery available, from small worktop ‘proof of concept’ electrospinning machines for small laboratories to large pre-production-scale machines. This means that start-up and installation costs are far lower than for many other production processes.

The user interface of this machinery has also advanced with the times, making it far simpler to operate and carry out the processes with only a passing knowledge of polymers and electrostatics, so training up the workforce takes no time at all. The world is already seeing the benefits of this technology, particularly in health and medicine: wound patches and organ membranes, for example, are made artificially and used during surgical procedures, and because of the molecular structure of the material they can graft with living biological tissue. Pharmaceutical implants and patches for the slow release of medicine are another application. This is a field that will continue to grow as new discoveries are made.


9 disruptive technologies that will bloom before 2019 ends


Since the beginning of time, each new technological invention has meant a paradigm shift in the way people work. However, in recent years the frequency of change has accelerated to such an extent that companies have to renew themselves and their daily procedures almost every season. Usually these are small changes or mere adaptations, but sometimes an innovation appears that makes the previous mechanisms obsolete. This is what is known as disruptive technology.

2019 is a disruptive year as far as technology is concerned: the trend of innovation continues at an accelerated pace, deepening the technological revolution. Innovative industries keep evolving, overcoming barriers previously imaginable only in Isaac Asimov’s sci-fi novels or in TV series and films such as Black Mirror or Gattaca. Here are the technological trends that are making a disruptive change in the digital transformation.

1. 5G mobile networks

Some companies have started to launch pilot experiments with this kind of technology. 5G prepares the ground for browsing at speeds of up to 10 gigabits per second on mobile devices.
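
As a rough illustration of what that headline figure means in practice, here is a back-of-the-envelope sketch; the 5 GB file size and the 100 Mbit/s 4G comparison are assumptions for illustration, and protocol overhead is ignored.

```python
def transfer_time_seconds(file_size_gb: float, link_speed_gbps: float) -> float:
    """Ideal transfer time: file size (gigabytes) over link speed (gigabits per second)."""
    file_size_gbits = file_size_gb * 8          # 1 byte = 8 bits
    return file_size_gbits / link_speed_gbps    # ignores overhead and congestion

# Assumed example: a 5 GB video file on a 10 Gbit/s 5G link vs. a 100 Mbit/s 4G link.
print(f"5G (10 Gbit/s): {transfer_time_seconds(5, 10):.0f} s")
print(f"4G (0.1 Gbit/s): {transfer_time_seconds(5, 0.1):.0f} s")
```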

2. Artificial intelligence (AI)

This will be the year of its definitive take-off. AI is now on political agendas: the European Commission has made it one of its mandates for member states to develop a strategy on this matter by the middle of the year.

3. Autonomous devices

Robots, drones and autonomous mobility systems are some of the innovations related to AI. They all aim to automate functions that were previously performed by people. This trend goes beyond mere automation through rigid programming models, as it explores AI to develop advanced behaviors that interact more naturally with the environment and users.

4. ‘Blockchain’

Finally, this technology is no longer associated only with the world of cryptocurrencies, and experts are starting to notice its likely application in other fields. In congresses such as the annual IoT World Congress by Digitalizing Industries, coming in October 2019, we will witness the actual implementation of many projects based on ‘blockchain’, which will try to solve the challenges the technology still faces in fields such as banking and insurance. It will also be a decisive year for the deployment of ‘decentralised organisations’ operating around smart contracts.
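
The core idea behind ‘blockchain’, leaving aside consensus, networking, and smart-contract execution, is a ledger in which each record carries a cryptographic hash of its predecessor, so past entries cannot be altered unnoticed. The sketch below is purely illustrative (the insurance-style payment record is an invented example, not from the article):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of the block's contents (everything except its own stored hash)."""
    contents = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def make_block(index: int, data: dict, prev_hash: str) -> dict:
    """Create a block that records its data plus the hash of the previous block."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def is_chain_valid(chain: list) -> bool:
    """Each block must match its own hash and link to the previous block's hash."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"] or block_hash(prev) != prev["hash"]:
            return False
    return block_hash(chain[-1]) == chain[-1]["hash"]

genesis = make_block(0, {"note": "genesis"}, prev_hash="0" * 64)
payment = make_block(1, {"from": "insurer", "to": "claimant", "amount": 100}, genesis["hash"])
chain = [genesis, payment]

print(is_chain_valid(chain))           # True
payment["data"]["amount"] = 1_000_000  # tampering with a recorded transaction...
print(is_chain_valid(chain))           # False: the stored hash no longer matches
```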

5. Advanced analytics

‘Big data’ is taking a step further with this trend, which combines the technology with artificial intelligence. Machine learning techniques will transform the way data analysis is developed, shared and consumed. It is estimated that the capabilities of advanced analytics will soon be widely adopted not only to work with information, but also to be embedded in business applications in Human Resources, Finance, Sales, Marketing and Customer Service departments, in order to optimize decisions through a deep analysis of data.
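
As a concrete, deliberately simplified illustration of the kind of machine-learning step such analytics embed into business applications, the sketch below trains a classifier on synthetic data. The use of scikit-learn, the generated dataset, and the churn-style framing are all assumptions made for illustration, not anything prescribed here.

```python
# A minimal predictive-analytics sketch: synthetic "customer" records -> churn-style score.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for historical business records (20 numeric features per "customer").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),                 # normalise the raw features
    ("clf", LogisticRegression(max_iter=1000)),  # simple, interpretable classifier
])
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print("score for one new record:", model.predict_proba(X_test[:1])[0, 1])
```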

6. Digital twins

Digital twins are one of the disruptive technologies that will have the greatest impact on the simulation and analysis of industrial processes. A digital twin is the virtual representation of a real-world entity or system, capable of maximizing the benefits of the digital transformation of companies. Many companies and organizations are already implementing these representations and will develop them over time, improving their ability to collect and visualize the right data, apply improvements to it, and respond effectively to business objectives.
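
In code, the essence of a digital twin is an object that mirrors the live state of a physical asset and can be queried or analysed without touching the asset itself. The following sketch uses a hypothetical pump with made-up sensor names and thresholds, purely to show the pattern:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PumpTwin:
    """Virtual mirror of one physical pump: latest readings plus a short history."""
    asset_id: str
    max_temp_c: float = 80.0                      # assumed alarm threshold
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin with a new sensor reading pushed from the field."""
        self.history.append(reading)

    def average_flow(self) -> float:
        return mean(r["flow_lpm"] for r in self.history)

    def needs_attention(self) -> bool:
        """Analysis runs against the twin, not against the physical asset."""
        return bool(self.history) and self.history[-1]["temp_c"] > self.max_temp_c

twin = PumpTwin(asset_id="pump-17")
for reading in [{"temp_c": 61.2, "flow_lpm": 118}, {"temp_c": 84.5, "flow_lpm": 97}]:
    twin.ingest(reading)

print(twin.average_flow())      # 107.5
print(twin.needs_attention())   # True: last temperature exceeded the threshold
```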

7. Enhanced Edge Computing

Edge computing is a trend mostly applied to the Internet of Things. It consists of placing intermediate processing points between connected objects and the cloud, so that information is processed and other tasks are performed closer to where the user receives the content, reducing traffic and latency in responses. This is a way of keeping processing near the endpoint rather than on a centralized cloud server. Rather than creating a new architecture, however, edge computing will evolve as a model complementary to cloud services, managed as a centralized service that runs not only on centralized servers but also on local distributed servers and on the edge devices themselves.
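
A tiny sketch of the idea follows, with simulated sensor values and a print statement standing in for the actual uplink; all names are hypothetical. Instead of streaming every raw reading to the cloud, the edge node aggregates locally and ships only a summary.

```python
import random
import statistics

def read_sensor() -> float:
    """Stand-in for a local temperature probe on the edge device."""
    return random.uniform(18.0, 26.0)

def send_to_cloud(summary: dict) -> None:
    """Stand-in for the uplink; a real node would send this to a central service."""
    print("uplink:", summary)

def edge_loop(window: int = 60) -> None:
    """Collect a window of raw readings locally, forward only the aggregate."""
    readings = [read_sensor() for _ in range(window)]
    send_to_cloud({
        "count": len(readings),                       # 60 raw samples stay on the device
        "mean": round(statistics.mean(readings), 2),  # only the summary crosses the network
        "max": round(max(readings), 2),
    })

edge_loop()
```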

8. Immersive experiences in intelligent spaces

Chatbots integrated into different conversation platforms and voice assistants are transforming the way people interact with the digital world, as are virtual reality (VR), augmented reality (AR) and mixed reality (MR). The combination of these technologies will lead to a profound change in how we perceive everything that surrounds us, through the creation of intelligent spaces where more immersive, interactive and automated experiences can be delivered to a specific group of people or for specific scenarios in an industry.

9. Digital ethics and privacy

Digital ethics and privacy are issues of increasing interest to individuals, organizations and governments. It is no coincidence that people are increasingly concerned about how their personal information is being used by public and private sector entities, so in the coming months companies will be proactively addressing these concerns in order to gain the trust of users.



You haven’t virtualized yet – why you should do so as soon as possible


Virtualization is not a new thing; it has been around for some time now and is one of the key ways a business can protect its IT infrastructure and reduce costs.

Opting for cloud VDI (virtual desktop infrastructure) is absolutely the way forward for businesses, but there could be many reasons why you haven’t been able to make the change yet.

Maybe you have not had a good enough network to support externally hosted desktops and applications, or you are a smaller business that is only just beginning to think of moving to a virtual enterprise structure. It could also be that you are suffering from the hangover of an older infrastructure, with your own onsite servers just coming to the end of their asset lifetime. Either way, your next move should be to look at virtualization, and here is why.

The savings can be substantial

Without a doubt, the biggest reason is the cost savings you will make. Any company or business needs to be fully aware of the bottom line, and while the project to virtualize will need a little investment, in the long term it will save your business a lot more.

For example, you will no longer need onsite servers. Hardware is expensive to replace, and in order to keep up with technology it needs to be replaced every few years. Servers also need upgrades, require server engineers to manage them and a specialised location to store them with adequate cooling, and they use a lot of electricity. And this is before you even begin to think about the licences for the operating systems and applications.

Increased reliability and security

With security becoming so much more important, especially if you are holding any personal data, you need to be sure that you have adequate security measures in place to protect your IT services. By virtualizing applications in a data centre via the cloud, you can make sure that those provisions meet exactly what you need.

You can also increase uptime and availability for your users through better mirroring and failover provisions. Data centres are geared towards maximum uptime, and even if something goes wrong with a server, users will likely never even know, as the services move over to alternative servers. To create and host this type of infrastructure yourself would require a whole IT department!
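
Conceptually, that failover behaviour boils down to trying replicas in order until one answers. The sketch below is purely illustrative, with plain Python functions standing in for real replicas; it is not how any particular data centre implements it.

```python
def call_with_failover(replicas, request):
    """Try each replica in turn; the caller only sees the first successful answer."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:
            last_error = err          # this replica is down, fall through to the next
    raise RuntimeError("all replicas unavailable") from last_error

def primary(request):
    raise ConnectionError("primary offline")      # simulate the failed server

def mirror(request):
    return f"'{request}' handled by the mirror"   # standby replica picks up the work

print(call_with_failover([primary, mirror], "open user session"))
```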

Increased productivity for your workforce

By moving to desktop virtualization, your employees will be able to access their documents and applications from almost any device. From mobile devices, tablets and laptops, they will be able to do whatever they need, whenever and wherever they need it. For companies operating internationally or with a lot of travel involved, this is absolutely vital.

It can also set the scene for flexible working, already proven to make the workforce much more productive. It also means that should a device break down, it is simple enough to switch to another.

Management of company devices is also a lot simpler, with setup and deployment happening remotely. All your installations, updates and patches, backups and virus scans can be controlled centrally. It also means much better management of software assets.

In addition, your service provider should be able to offer a whole range of support for your IT teams, with access to many disciplines and areas of expertise to keep you running at your maximum 24 hours a day if needed.

Desktop virtualization is definitely the way forward for any business. It makes end-user environments much more secure. Reliability and uptime are better, which also keeps those end users happy and productive in their own work. No more lost working hours due to broken servers. Approached strategically, this can revolutionise your business and its operations well into the future.
