Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.
We live in the future. No, we don't drive flying cars to work or fly around on jetpacks in our spare time, but make no mistake: the future is real when it comes to business. Everything is connected, everything is automated, and if it's not, it could be. The questions are: should it be? And if so, how do we as citizens, and more pertinently as leaders of business, act ethically when making decisions that affect the welfare, safety, and security of our customers and employees?
Keywords: Safety, Cybersecurity, Identity Theft, Automation, Ethics
Effects of Modern Information Systems on Ethics
As time marches on, so too does the entropy of the universe; the business world reflects this increase in chaos and becomes an ever more complex place. Technologies change how business is done. Governments change what you are allowed to do. Customers want ever better products and a more personal experience, while offering less and less information with which to provide it. Navigating this chaos is difficult for today's business leaders, and doing so while stepping on as few toes as possible is more difficult still. How, then, do we conduct business efficiently and profitably, pleasing our stakeholders without endangering our customers or employees? The bleeding edge is bloody for a reason: you don't get there without some pain, and absorbing that pain rather than passing it on to those whose safety we are responsible for is a requisite.
To get a better perspective on how to handle these topics, a literature review must be performed. The core topics for review are information security and cybersecurity preparedness, specifically business continuity and customer data protection, and automation safety, both in traditional manufacturing systems and in light of the concerns that the gradual introduction of pseudo-artificial intelligence is bringing to a world still adjusting to the idea that cars are built by robots.
Perhaps one of the most complex issues to develop over time, and one that has recently come to a head, is how to handle company assets that aren't physical. The need for physical security is as old as time immemorial, and the solutions are reasonably obvious even to a layman: build walls, post guards, use locks. Prevent physical access, because no one can steal something they can never reach. The problem with cybersecurity is that while you and your team can erect metaphorical cyberwalls that should prevent access, unlike a real wall, a weak cyberwall gives no visible sign of weakness. The "bricks" you use to build it might even contain flaws that allow unwanted access. The question, then, is what you should do to be ready.
After the terrorist attacks on the United States in 2001, there was renewed interest in ensuring the continuation of business and critical infrastructure should more attacks ever come. The Patriot Act attempted to define this critical infrastructure, and Aaron Sedgewick (2018) of the National Institute of Standards and Technology cites that definition: "Critical infrastructure is defined in the U.S. Patriot Act of 2001 as 'systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.'"
One of the things that the Department of Commerce, and in turn NIST, has attempted to create is a guide to cybersecurity preparedness: both how to achieve it and how to grade your company on it. Version 1.0 was released in 2014 and, thanks to the cross-functional team that created it, drawn from all reaches of government, private industry, and academia, has been widely adopted voluntarily across the United States and internationally (Huergo, 2018). Version 1.1 was released this year to much fanfare and included significant updates: language that is less confusing when pitching the framework to stakeholders who are not required to adopt it, a new self-assessment section, additional guidance on technology related to cybersecurity, and multitudinous other areas. These updates were driven by the large amount of feedback NIST received on the first draft, as well as by several workshops held toward the creation of a version 2.0 (Sedgewick, 2018).
Figure 1. Cybersecurity Focus by Nations courtesy of https://ipacso.eu/26-economics-of-cyber-security/189-indicators-and-metrics.html
It is perhaps no surprise, then, that the United States is rated as a nation that places a great deal of focus on cybersecurity. With the premier cybersecurity guidelines being generated by its government, it is very clear that the country recognizes the threat lax cybersecurity poses, both to the nation and to its people. Being the leader in technology is both a benefit and a bane: not only must you develop great ideas and products, you must also defend that knowledge, because everyone else wants it. So the question becomes: what is the "correct" way of maintaining a secure cyber presence, as NIST views it?
Figure 2. The Core Structure of the NIST Framework as included in the NIST Framework for Improving Critical Infrastructure Cybersecurity Version 1.1
NIST identifies five core functions in its cybersecurity framework. The first is identifying the assets, people, data, and systems that are at risk: essentially, what are the things we are safeguarding, and who is doing the safeguarding? Once we have a strong understanding of what needs to be protected, the second function is developing and putting systems in place to play defense and prevent problems from happening in the first place. Perhaps most important of all is the detection function: worse than being breached and losing data is not knowing it happened at all, because a detected problem can be reacted to and its consequences planned for. Detection of a problem in turn requires a response to shut the metaphorical leak down; it does no good to move on to the fifth function, recovery, if you have not been able to respond effectively to the incident (Sedgewick, 2018). All of these functions have multitudinous facets, and discussing them exhaustively lies outside this document's scope, but the referenced NIST document is free for everyone to access, and I would encourage a thorough reading of it, time permitting.
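As a rough illustration of how the five functions might frame a self-assessment, the sketch below treats each function as a set of scored activities. The function names come from the NIST framework itself, but the example activities and the 0-4 maturity scores are purely hypothetical, not part of any NIST document:

```python
# Illustrative sketch only: the five core functions are NIST's; the
# activities and maturity scores (0 = absent, 4 = optimized) are invented.
framework = {
    "Identify": {"asset inventory": 3, "risk assessment": 2},
    "Protect":  {"access control": 3, "awareness training": 1},
    "Detect":   {"continuous monitoring": 2, "anomaly detection": 1},
    "Respond":  {"incident response plan": 2, "communications": 3},
    "Recover":  {"recovery planning": 1, "improvements": 2},
}

def weakest_functions(fw, threshold=2.0):
    """Return functions whose average maturity falls below the threshold."""
    return sorted(
        name for name, activities in fw.items()
        if sum(activities.values()) / len(activities) < threshold
    )

print(weakest_functions(framework))  # ['Detect', 'Recover']
```

Even a toy tally like this makes the framework's point concrete: a company strong on identification and response can still be weakest exactly where it hurts most, in detection and recovery.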
All of this information on preparedness is great and provides a lot of insight into how we can attempt to be cybersecure, but we also know that almost weekly some company suffers a security breach leading to the loss of its customers' or employees' information. So just how far along is the business world in becoming cybersecure and developing a culture that puts a focus on it?
Figure 3. High Level Metrics For Cybersecurity Preparedness courtesy of https://www.thinkbrg.com/newsroom-publications-cybersecurity-preparedness-benchmarking-study-report.html
The Berkeley Research Group attempted to take a snapshot of industry's own views on cybersecurity preparedness in 2016 and found very mixed results. While more than half of companies believe their cybersecurity culture is well developed and effective, almost no one believes it to be completely effective. Part of that is the natural unease that comes from bragging about your security, since doing so almost invites people to test it, and that is a terrible idea. But part of it is that cybersecurity is all too often treated as an IT problem, and that is a recipe for failure: cybersecurity is a major risk for companies, and managing it successfully must be done at the enterprise level (BRG, 2016).
Workplace safety is a much more visceral problem for business leaders to consider. While protection of employee, customer, and business-critical data is key to a firm's longevity, there is something much more personal about ensuring the safety of the people who work with you. Perhaps this is due to the more relatable nature of the problem, or perhaps the problems at hand are simply more visual. Automation safety in particular is a soft spot, maybe even a sore spot, for the industry. While a standard pick-and-place robot is no more dangerous than a hydraulic press or another older piece of machinery, the eerily human nature of robots puts their actions under even higher levels of scrutiny. The name "robot" has unfortunately gained negative connotations over years of portrayal in media: from Gort in "The Day the Earth Stood Still" to the Terminator series, robots are often powerful entities with little regard for humans or their surroundings. Nonetheless, the manufacturing industry relies on robots to advance manufacturing capabilities and build bigger, better, and ultimately cheaper things.
Figure 4. Statistics on the frequency of articles published on “Intelligent Manufacturing” broken out by source type and time period. Adapted from Intelligent Manufacturing in the Context of Industry 4.0: A Review doi:10.1016/j.eng.2017.05.015
As Figure 4 above shows, the amount of research and thought going into the next generation of automated, intelligent manufacturing is exploding. The industry is preparing to undergo a fourth industrial revolution, and the idea of integrating physical systems with people is becoming increasingly core to it. Where the third industrial revolution was the addition of basic computers and robots in place of humans, the fourth will be much more about them working together hand in hand, potentially literally in some cases. That raises the need for safety controls even further. New safety rating methods are even being created: alongside a process's normal risk priority number for quality problems, a safety risk priority number is also generated (Silvestri, De Felice, & Petrillo, 2012). Ensuring a process is smart and safe is the next big thing for manufacturing, but we are reaching a point where the processes and manufacturing systems we build will be asked, over time, to do things that were simply not in their original scope. Adaptability like that is the future, but it opens up an entirely new can of worms for safety. For humans and automated machines to work together efficiently, you need a new level of interfacing between them: essentially, a dedicated safety brain for the process that understands where everyone is and what everyone should be doing at that moment in the process (Perkon, n.d.).
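The safety risk priority number mentioned above follows the familiar FMEA arithmetic of severity times occurrence times detectability. The sketch below assumes the common 1-10 rating scales; the specific factor values for the example failure mode are invented for illustration, not drawn from Silvestri, De Felice, and Petrillo's data:

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number: each factor rated 1 (best) to 10 (worst)."""
    for factor in (severity, occurrence, detection):
        if not 1 <= factor <= 10:
            raise ValueError("each factor must be rated 1-10")
    return severity * occurrence * detection

# Hypothetical failure mode in a collaborative robot cell, scored twice:
quality_rpn = rpn(severity=4, occurrence=3, detection=2)  # risk of a product defect
safety_rpn = rpn(severity=9, occurrence=2, detection=5)   # risk of harming an operator

print(quality_rpn, safety_rpn)  # 24 90
```

The point of running the two numbers in parallel is visible even in this toy example: a failure mode that barely registers as a quality problem can still dominate the safety ranking once human harm enters the severity scale.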
Figure 5. An example of the industry best practice Safety Lifecycle for ensuring safe relations between humans and robots, retrieved from https://www.controleng.com/articles/human-robotic-collaboration-what-will-osha-say/ courtesy of Rockwell Automation
The framework shown above is an example of the safety life cycle approach to machine and process design that is championed as the industry best practice. At its core, the requirement is to think about all the different cases where humans and robots will interact in a potentially dangerous fashion, and to consider on a global scale how the process must operate during those various finite states so that there is minimal danger to the humans, the process, and the product (Hoske, 2013). One complication at the business-leader level that needs careful attention is that, realistically, the industry is outpacing the regulations placed on it. This is particularly relevant to this document because of the ethical concerns it raises: companies should not have to be regulated into ethical behavior, and maintaining a focus on safety and a commitment to employee well-being while on the cutting edge of technology is a great opportunity to demonstrate that. Thankfully, as companies champing at the bit integrate new technologies in a grey zone, OSHA and the other regulatory bodies get more and more exposure to the ideas at hand and are forced to catch up, allowing for minimal lag time and risk (Hoske, 2013).
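The "finite states" idea at the heart of the safety life cycle can be sketched as a tiny state table in which the robot's permitted behavior depends on where the human is. The zone names, modes, and speed limits below are hypothetical illustrations, not an implementation of any real safety controller or standard:

```python
# Toy finite-state view of a collaborative cell: each state the process
# can be in maps to the most permissive robot behavior allowed there.
# All names and limits here are invented for illustration.
SAFETY_STATES = {
    "no_human_in_cell":    {"robot_mode": "full_speed",    "max_speed_mm_s": 1500},
    "human_in_outer_zone": {"robot_mode": "reduced_speed", "max_speed_mm_s": 250},
    "human_in_inner_zone": {"robot_mode": "safe_stop",     "max_speed_mm_s": 0},
}

def allowed_speed(zone_sensors):
    """Pick the most restrictive state implied by the active zone sensors."""
    if zone_sensors.get("inner"):
        state = "human_in_inner_zone"
    elif zone_sensors.get("outer"):
        state = "human_in_outer_zone"
    else:
        state = "no_human_in_cell"
    return SAFETY_STATES[state]["max_speed_mm_s"]

print(allowed_speed({"outer": True, "inner": False}))  # 250
```

The design point is that the restriction is chosen globally, from the state of the whole cell, rather than each robot reasoning about danger on its own, which is the "dedicated safety brain" notion from the literature review in miniature.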
After looking into these topics from the perspective of how they should be done, reflecting on the real world and its foibles and failings allows us, as business leaders, to learn from the mistakes of others and hopefully avoid making those mistakes ourselves. As we saw in Figure 3, putting a lot of focus on a topic does not by itself ensure any degree of success, and the best intentions don't always amount to anything. That is all the more reason to review failures and analyze how they occurred. This section examines three real-world failures: two instances of failure to protect employee and customer data, and one brief review of the safety perils of next-generation automation, a topic where failures occur on a much smaller scale and naturally receive much less scrutiny.
In 2013, Target was used as a channel for mass data theft from its customers. Names and credit and debit card numbers, along with their security features such as expiration dates and CVV2 numbers, were lost on perhaps the largest scale in history at that time. It is almost certainly the most costly cybersecurity incident in terms of direct monetary outlay.
Figure 6. Total losses suffered by Target as a result of 2013 Data Breach, courtesy of https://interset.com/2017/02/09/target-breach-keeps-taking/target-losses/
Approximately 2.5 billion dollars in losses, with the vast majority of that being money actually paid out rather than hypothetical value lost, should raise some serious eyebrows, along with serious questions as to how a breach of this scope occurred and how it can be prevented from recurring, whether at Target or at any firm that deals with customer financial data.
Perhaps the scariest thing for a business leader about the attack, besides the loss of $2.5 billion in value, is that it arguably wasn't Target's fault, at least not directly. The best summary of what happened for a layperson was, in the author's opinion, put together by Tracy Kitten of the Information Security Media Group, and much of the information contained here is based on a review of that report, with added commentary based on further technical analysis of the situation.
Perhaps the most painful thing about the attack is that it was performed via Target's internal network; specifically, the avenue of attack was the enterprise-level update servers used to ensure that all point-of-sale systems were on the latest firmware version, keeping them as secure as possible when handling customers' financial data. A third party gained access to this network through a phishing attack on a firm that had done contracting work for Target. Once inside Target's network, the attackers were able to use the contracting firm's credentials to modify the updates sent out to point-of-sale terminals. The question that determines how much blame rests on Target is whether the contractor whose access was used should have had the access necessary to make those changes. Without the internal details of which systems the contractor worked on, it is impossible to know whether they should have been able to reach these sensitive systems. One of the unfortunate foibles of cybersecurity is that someone always has to have the power to do bad things, because the power to fix things and the power to break things are often the same. That is why the NIST framework discussed earlier places strong access control among its core pillars of protection and identification: making sure that people have only the minimum power needed to do their jobs, and access to nothing beyond what is required, goes a long way toward preventing breaches. Looking at the attack avenue and Target's response to the breach, it is quite hard to find ethical failings on their part. The amount of money they have spent trying to maintain customer goodwill and protect their customers from ongoing issues related to the breach is staggering, and suggests they are taking responsibility for the failure.
The fact that the attack was performed via the part of Target's internal network that forced security updates to be installed, precisely to prevent attacks like this, also shows, in an unfortunate way, that they seem to have been committed to protecting their customers' data.
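The least-privilege principle described above can be sketched in a few lines of deny-by-default access checking. The roles, resources, and permissions below are hypothetical and bear no relation to Target's actual systems:

```python
# Hypothetical least-privilege check: every account carries only the
# permissions its role explicitly grants; anything unlisted is denied.
ROLE_PERMISSIONS = {
    "billing_contractor": {"billing_portal:read"},
    "pos_admin":          {"update_server:write", "pos_terminal:read"},
}

def is_allowed(role, resource, action):
    """Deny by default: allow only explicitly granted (resource, action) pairs."""
    return f"{resource}:{action}" in ROLE_PERMISSIONS.get(role, set())

# A contractor account should never be able to touch the update servers:
print(is_allowed("billing_contractor", "update_server", "write"))  # False
print(is_allowed("pos_admin", "update_server", "write"))           # True
```

Under a model like this, stolen contractor credentials would only be worth the narrow set of permissions that role was ever granted, which is exactly the containment the NIST access control guidance is aiming for.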
It was suggested that the Target data breach was the largest of all time, and at the time it probably was. It has since been surpassed, in severity and scope, by the Equifax data breach of 2017. Nearly 150 million people were affected, and the data that was disseminated was far worse. It cannot be pretended that having your credit card information stolen and a large bill run up is anything but terrifying and inconvenient, but it pales in comparison to having your passport, driver's license, name, address, contact information, and social security number stolen. That is potentially life-ruining on a scale few things can match. The most frustrating thing is that many of the affected people never once chose to do business with Equifax; they never had any contact or dealings with the company whatsoever.
Where the attack on Target was tragic because of how cleverly it was performed and how it avoided detection, the attack on Equifax is tragic because of how basic it was, how it failed to avoid detection, and how simply no one cared. According to an article by Glenn Fleishman of Fortune magazine summarizing a report from the U.S. Government Accountability Office, the breach was actively leaking information for seventy-six days before being detected. The detection system that should have caught it was broken and being ignored, and had been offline for ten months at the time of the breach. The avenue of attack was perhaps the exact opposite of Target's: it was simply a server that no one had bothered to update. The congressional testimony described it as "forgotten."
Comparing these two events really shows the cultural differences that can exist around cybersecurity. Where Target would seem to earn a pass for its efforts to make its mistake right, and for the avenue of attack being its own good will, Equifax simply hasn't seemed to care. The only reparations it attempted were forced by the U.S. government, and the company attempted to defraud affected people of their right to seek reparations by fine-printing them during sign-up for the service that checked whether they were affected. Tack on the multiple insider trading lawsuits filed against employees, from the C-suite down to individual software developers, who realized the breach had happened and dumped their stock, and you really don't look good in front of a grand jury. Where I can find almost no cultural or ethical fault with Target, I can't find anything positive to say about Equifax. If Target is a shining beacon of ethical behavior with regard to cybersecurity, then Equifax is a black hole.
The fortunate thing about researching industrial safety incidents is that they almost invariably involve very few people; we do not live in a world where robots frequently go haywire and hurt people. This also means that industrial safety overall receives less attention than it probably should. That said, there was one instance in 2015 of a Volkswagen contractor being killed by an industrial robot that is worth reviewing in the context of ethical business decisions.
In 2015, Volkswagen was retrofitting one of its plants in Kassel, Germany with new robotic processes. During installation, one of the contractors was killed by the robot he was installing (Bryant, 2015). The exact details were never publicized, being technical in nature, but the incident raises the question of how something like this happens in Germany, a country with strong regulation of automation safety. The answer breaks down to this: industrial regulations really only apply to production workers. During the setup process there are no rules and regulations that must be followed. This goes back to the problem identified in the literature review about being on the edge of technology and outgrowing regulations. Asking employees to do something unsafe because of a vague technicality of the law is not something that should be happening.
Without a deep understanding of German regulations and the technical aspects of this fatality, it is difficult to discuss failures in the process, or how the team treated the idea of ethics in safety, at any more fundamental level. Unfortunately, that does not make the situation reflect any less poorly on the team working with this contractor at Volkswagen.
Business ethics is a topic that is very difficult to discuss, for several reasons. The first is that people outside a company very rarely get a full perspective on a situation, and that can change how a problem is viewed, for better or worse. For example, if the attack on Target had not received so much technical scrutiny, it would be much easier to criticize the company for its errors. The other reason is much less noble: it is simply very easy to look at the ethical failings of another with no regard for one's own. Ethical quandaries are named as such because they are complex; if ethical problems were as easy as always taking the high road, and everyone always did so, the discussion would be much less valuable. Tying that into the idea of modern information systems and advancing technology only reinforces the problem. As business leaders, we live in a world we do not always know how to navigate perfectly. To some degree, ethics are subjective, based on the stance of the society we live in, and the nature of technological advancement is that we run into potentially ethical problems before society has decided what it thinks about them.
- Bryant, C. (2015, July 01). Worker at Volkswagen plant killed in robot accident. Retrieved December 9, 2018, from https://www.ft.com/content/0c8034a6-200f-11e5-aa5a-398b2169cf79
- Fleishman, G. (2018, September 8). Equifax Data Breach, One Year Later: Obvious Errors and No Real Changes, New Report Says. Retrieved December 9, 2018, from http://fortune.com/2018/09/07/equifax-data-breach-one-year-anniversary/
- Hoske, M. T. (2013, August 31). Human-robotic collaboration: What will OSHA say? Retrieved from https://www.controleng.com/articles/human-robotic-collaboration-what-will-osha-say/
- Huergo, J. (2018, April 16). NIST Releases Version 1.1 of its Popular Cybersecurity Framework. Retrieved December 7, 2018, from https://www.nist.gov/news-events/news/2018/04/nist-releases-version-11-its-popular-cybersecurity-framework
- Kitten, T. (n.d.). Target Breach: What Happened? Retrieved December 8, 2018, from https://www.bankinfosecurity.com/target-breach-what-happened-a-6312
- Perkon, D. (n.d.). Basic automation safety requires not-so-basic safety. Retrieved December 8, 2018, from https://www.controldesign.com/articles/2017/basic-automation-safety-requires-not-so-basic-safety/
- Sedgewick, A. (2018). Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1. doi:10.6028/nist.cswp.04162018
- Zhong, R. Y., Xu, X., Klotz, E., & Newman, S. T. (2017). Intelligent Manufacturing in the Context of Industry 4.0: A Review. Engineering, 3(5), 616-630. doi:10.1016/j.eng.2017.05.015