Encryption, AI, and the IoT word on the street

One of the last stories I read at the end of 2016 was about a hedge fund and its use of AI (Hedge fund turns to AI…https://nakedsecurity.sophos.com/2016/12/30/hedge-fund-turns-to-ai-to-navigate-through-the-maze/).  The algorithms such firms use for trading stocks have been in place for quite some time now, so that part wasn't what struck me.  What I found compelling was their intent to apply AI to their business operations.  Bridgewater Associates has what they call a "Principles Operating System," or PriOS for short, which will act as an arbitrator when management is gridlocked on decisions.  The hope is that within five years, most day-to-day management decisions will be made by AI.  Never mind the factory worker whose repetitive task is replaced by mechanical devices; now cognitive and intellectual decisions are being made by silicon.

Not long afterward, my son and I found an interesting online video game called "Fallout Shelter" (https://en.wikipedia.org/wiki/Fallout_Shelter).  It's amusing and has some interesting things to say about how management works.  The basis of the game is to place a workforce in key positions to obtain the best results.  The work conditions, workforce, and responses lead to positive or negative outcomes based on management decisions, and it really does a good job at this.  As the saying goes, there is no such thing as a free lunch, and that applies to free games too.  I'm not sure exactly how the algorithms behind PriOS-like systems are formed.  However, I suspect that readily available trends from thousands or millions of gamers are a potent ingredient.  Who would have imagined an eight-year-old shaping the decisions of one of the world's largest hedge funds?

Artificial intelligence, commonly referred to as AI, is essentially algorithms: complex flowcharts that follow a decision-making path based on conditions, constants, feedback, and input.  In its present state, the term AI could just as well stand for Algorithmic Infrastructure.  Kevin Slavin's 2011 lecture illustrates how stock market algorithms are shaping physical infrastructure in pursuit of a nanosecond advantage over the competition.  He points out that physical location inherently determines the response time of electronic communication.  With at least 70% of stock market activity executed by algorithms, these response times are critical.  As a result, communication hubs such as the Carrier Hotel in New York are seeing a trend: the blocks surrounding them are occupied less by humans and more by silicon.

Christopher Steiner's lecture goes on to say that the AI functions that dominate the stock market are also used for psychological profiling.  Personality traits have long been used in government to determine placement in mission-critical roles, a method that decreased the likelihood of compromise.  These match-made-in-heaven functions are now used commercially.  The message when calling in for support that states "your call may be monitored or recorded for training purposes" likely signals an algorithm at the ready to profile your personality as you speak.  He points out that over the next 20 years, the haves and have-nots will be determined by lines drawn by algorithms.  That can be exciting or disturbing, depending on which side of the line you end up on.

The ethics of AI is a bigger challenge than understanding what AI is doing.  The driving force behind AI is economic stability, and economic stability binds us to modes and methods that typically serve the immediacy of today rather than the long-term good of humanity.  Concerns about AI are expressed in lectures by Jeremy Howard, Nick Bostrom, and Stefan Wess.  The lack of understanding and control has already had consequences.  The flash crash of May 6, 2010 is an indicator of AI manipulation, and many resources were spent uncovering what had occurred (https://en.wikipedia.org/wiki/2010_Flash_Crash).  If the losses had been smaller, or had impacted only a few, would the same resources have been employed to find the cause?  The answer to that question fuels the unease.

With behavioral science rooted in the drive behind AI, there are concerns about privacy.  This has prompted the creation of laws, such as those protecting public privacy from internet service providers.  This law, enforced through the FCC, defines how ISPs can gather and market the internet usage of their subscribers.  With recent political changes, it will likely be reversed.  Consumer protection is by and large the responsibility of the Federal Trade Commission, commonly referred to as the FTC.  The issue is the trading of personal information for profit.  If ISPs offered their subscribers compensation in exchange for data marketing, then privacy might be a moot point.  But full disclosure of how the data is used remains clouded, which limits one's ability to make an informed decision about personal privacy.  And human internet activity isn't the only player now, with the rise of the internet of things.

An internet of things (IoT) device is one that typically doesn't interact directly with humans yet has connectivity to the internet.  Here's an example.  Digital video recorders capture video from cameras that are connected together over a network.  A human may access the DVR and interact with it, but it is the DVR, not the human, that interacts with the cameras.  The same is true for any device that has some type of controller.  Access points are a good example: although we may connect to them, we don't really interact with them.  Our interaction with APs is brief, and we then move along to interact with something else, like browsing the internet, checking email, or getting on social media.  Those APs and cameras are just floating in internet space, and they remain connected even when we aren't.  They share a common weakness inherent to all computers: the operating system.  In most cases, these devices use hard-coded programs burned onto silicon chips, commonly referred to as firmware.  Many systems have the ability to upgrade firmware, while others do not.  This is a severe security issue.

There have been documented steps for reverse engineering firmware to reveal the inner workings of devices.  Let's take our internet camera as an example, since it was the root cause of a major internet disruption last year.  One of the most damning discoveries was a particular firmware with an administrator account hard coded into it.  This information was obtainable, and since it was hard coded, any device of this make on the internet was vulnerable.  Let me put emphasis on this: hard coded means no one can override the setting; it is fixed and cannot be changed without a firmware update that omits this information.  The other issue is non-flashable programming burned onto silicon chips.
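Discoveries like that often begin with nothing fancier than pulling printable text out of a firmware image, the same idea as the Unix strings utility.  Here's a minimal sketch of the concept in Python; the firmware bytes and the credential values below are entirely made up for illustration, not taken from any real device:

```python
import re

def extract_strings(blob: bytes, min_len: int = 5) -> list:
    """Pull runs of printable ASCII out of a binary blob,
    the same idea as the Unix `strings` utility."""
    printable = re.compile(rb"[ -~]{%d,}" % min_len)
    return [m.decode("ascii") for m in printable.findall(blob)]

# A made-up firmware image: opaque machine bytes with a
# hypothetical hard-coded credential pair embedded inside.
firmware = (b"\x7fELF\x01\x02" + b"\x00" * 16 +
            b"login:admin\x00pass:s3cret\x00" + b"\xff" * 8)

for s in extract_strings(firmware):
    if any(k in s.lower() for k in ("admin", "pass", "login", "root")):
        print("possible credential:", s)
```

Real firmware analysis tools go much further (unpacking file systems, identifying compression), but even this crude pass is often enough to expose a hard-coded account.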

Silicon chip manufacturing is the brainchild of Robert Noyce in the late 1950s.  He and a host of others, known as the "traitorous eight," went on to revolutionize how electronic circuits are made.  Gone was the old-world idea of discrete components.  Now those discrete components are printed, sprayed, etched, and formed on silicon wafers to make what we call the integrated circuit.

When an integrated circuit, or silicon chip, is created, the end result typically will not change.  These chips are factory programmed.  The issue this raises is one of integrity: if a chip is created with a bug or with intended backdoors, very little can be done to correct it.

The manufacturing process is considered intellectual property and is protected as such under law.  However, those outside the law can surface-grind the chips and examine the layers microscopically to reverse engineer their construction.  As a result, the public is unaware and the outlaws are informed.  Could a backdoor exist?

The idea of backdoors is not new.  Last year also saw the issue of encryption and the ability to circumvent it, when law enforcement and Apple were at odds with each other.  Many security experts agree that creating a backdoor in encryption would require a master key.  Should the master key be misplaced, the entire mechanism encryption was designed for no longer works.

From its beginnings, the development of encryption for general public use was opposed.  It wasn't until money began moving through electronic means that this changed.  Since these transactions would be carried out by the public, financial institutions needed absolute certainty that the transactions were valid.  Encryption ensures that communication between two points is unreadable by a third party.  It also ensures that the two points know for certain they are communicating with each other, without a third party being able to spoof either of them.  Without encryption, financial institutions would be vulnerable to electronic fraud.  Ultimately the policy on public encryption yielded, and it became available.
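That anti-spoofing guarantee, in cryptographic terms, is message authentication.  A minimal sketch of the idea using Python's standard hmac module, assuming the two endpoints already share a secret key (real protocols negotiate keys rather than hard code them, and the key and message here are made up):

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # hypothetical pre-shared key

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Sender attaches an HMAC tag computed with the shared key."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Receiver recomputes the tag.  A third party without the key
    cannot forge a valid tag, so a match authenticates the sender."""
    return hmac.compare_digest(sign(message, key), tag)

msg = b"transfer $100 to account 42"
tag = sign(msg)
print(verify(msg, tag))                              # True: genuine
print(verify(b"transfer $9999 to account 666", tag)) # False: spoofed
```

Note the use of compare_digest rather than ==, which avoids leaking information through comparison timing.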

Another form of validation is the file checksum.  Digital data is unique: if it is changed, it will no longer be mathematically the same, and the checksum of a file can reveal that change of state.  This means data sent through non-secure channels can be validated.  It seems reasonable that such a mechanism should exist for any hardware, software, or AI that processes data and outputs an expected result.  Validation will be key to the integrity of the ever-changing face of the internet and the big data it contains.
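The checksum idea is simple enough to show in a few lines.  A sketch using SHA-256 from Python's standard library (the payload strings are invented for the example): even a one-character change produces a completely different digest, so sender and receiver can compare checksums to detect tampering.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest: any change to the data yields a different value."""
    return hashlib.sha256(data).hexdigest()

original = b"payload sent over an untrusted channel"
received = b"payload sent over an untrusted channel"
tampered = b"payload sent over an untrusted chann3l"

print(checksum(original) == checksum(received))  # True: data intact
print(checksum(original) == checksum(tampered))  # False: data altered
```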
