We’re seeing a progressive slowdown in the rate of hardware change in our smartphones these days, as individual innovations comprise smaller incremental enhancements rather than huge step changes. But while the rate of change within individual devices has slowed, the rate of change in the combined capabilities of apps, and in the integration of devices across an intelligent and connected internet, is accelerating at a breathtaking rate!
Every technology described here is connected to some degree and contributes to this sum of change, working to transform the way we work, and perhaps taking us one step further towards a fully connected world, one where AI is far more sophisticated and sapient than it is today.
“The real risk with AI isn’t malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” (Stephen Hawking, 2015)
Blockchain
The current belief is that blockchain may initially prove its value in supply chain management, and this is where firms like Cisco and IBM are evaluating the technology. The technology still has some development to complete before it starts to gain widespread adoption: research indicates it will be up to 10 years before blockchain becomes widespread in supply chain management, and between 5 and 25 years for financial services.
Companies already heavily invested in blockchain include HP, Microsoft, IBM, and Intel. In the financial-services sector participants include such influential banks as Citi, Bank of America, HSBC, Deutsche Bank, Morgan Stanley, UniCredit, Société Générale, Mitsubishi UFJ Financial Group, National Australia Bank, and the Royal Bank of Canada. Another early experimenter is Nasdaq, which uses a blockchain-based digital ledger for transferring shares of privately held companies.
What is it?
A blockchain is a digitized, decentralized, public ledger of cryptocurrency transactions. It constantly grows as ‘completed’ blocks are recorded and added in chronological order. This allows participants to track digital currency transactions without the need for any centralized record keeping. Each computer connected to the cryptocurrency network automatically downloads a copy of the blockchain.
Blockchains were originally developed as the accounting method for Bitcoin and provide what’s known as distributed ledger technology (DLT), which appears in a variety of commercial applications today. Currently the technology is primarily used to verify transactions within digital currencies, although it’s possible to digitize, code and insert any document into the blockchain, creating a permanent record that cannot be changed. The record’s authenticity can also be verified by the entire community using the blockchain instead of a single centralized authority.
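The permanence described above comes from hash-linking: each block stores the cryptographic hash of its predecessor, so changing any historical record invalidates every later link. A minimal, illustrative sketch in Python (not a real blockchain protocol, just the hash-linking idea):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of the previous block,
    # so altering any earlier block invalidates every later hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain):
    # Recompute every link; any tampering breaks the chain.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert is_valid(chain)

chain[0]["data"] = "Alice pays Mallory 500"  # tamper with history
assert not is_valid(chain)                    # detected immediately
```

A real blockchain adds distributed consensus and digital signatures on top of this linking, which is what removes the need for a central authority.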
Blockchain was first described in a 2008 paper published under the name Satoshi Nakamoto, which described an “Internet of money”: a shared database, organized as a general ledger, in which each block of data would be encrypted. Buyers, sellers, and trades would all be kept secret from one another. Trust, the key to any trading system, would be automated. The software would be open source, available for anyone to download, to use, and to extend.
Bitcoin and other digital currencies are not saved in a file somewhere but are represented by transactions recorded in a blockchain, which is akin to a global ledger spread across a large P2P network that verifies and approves each Bitcoin transaction. Each blockchain is distributed and runs on computers provided by volunteers around the world. There is no central database, and the blockchain is public, so anyone can view it at any time. It is also encrypted, using both public and private keys, to maintain virtual security.
The ability to effectively execute transactions without the presence of a central authority is regarded as one of the primary benefits of blockchain technology. It creates the promise that organizations will be able to transact business without being subject to third-party control.
Blockchain technology could transform any system where trading occurs, where trust is a critical factor and where protection from identity theft is critical. Some banks are already exploring how blockchain might change trading and settling, back-office operations, and investment and capital assets management. The technology could enable the processing of transactions with more efficiency, security, privacy, reliability, and speed.
Internet of Things
GE, Rolls-Royce and Cisco are all highly active IoT businesses, and their applications cover a wide spread of business challenges. Alphabet (Google’s parent) is rapidly evolving into a pure IoT company: it produces voice-controlled speakers, facial recognition, and thermostats and home security systems that integrate via a device such as a phone to connect to the world through the Google cloud. The estimates for future values are staggering: ‘BI Intelligence’ forecasts that $6 trillion will be spent on IoT solutions in the next five years alone!
What is it?
The phrase ‘Internet of Things’ was first coined by Kevin Ashton in 1999, well before anything except computers was actually connected to the internet. In the last 20 years Kevin has been challenging businesses to imagine a world where the internet permeates all aspects of people’s lives.
The Internet of Things (IoT) comprises connected devices including sensors, smartphones and domestic appliances. By adding a layer of processing over the data these devices exchange, it is possible to collate and analyze it and to drive decisions and actions. Very soon pretty much all of our digital devices, including microwaves, washing machines and fridges, will be connected via the internet, and with GPS data we will also be able to optimize and automate systems with geographical awareness, which is likely to add a whole new level of richness to marketing, augmented reality and all decision support systems.
The concept of incorporating sensors and intelligence into physical objects has been under discussion for the past 40 years, long before the IoT name was used to describe it, but it was heavily constrained by the technology available. The introduction of RFID tags helped to make IoT practical, as did high-capacity wireless networks and the adoption of IPv6, which provides enough IP addresses to cope with the number of devices that will be connected.
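The IPv6 point above is easy to quantify: IPv4’s 32-bit address space tops out at about 4.3 billion addresses, while IPv6’s 128-bit space is astronomically larger (the 50-billion-device figure below is purely illustrative):

```python
ipv4_addresses = 2 ** 32    # about 4.3 billion addresses
ipv6_addresses = 2 ** 128   # about 3.4 x 10^38 addresses

# Even with 50 billion connected devices, IPv6 leaves a vast surplus
devices = 50_000_000_000
per_device = ipv6_addresses // devices

print(f"IPv4 total:            {ipv4_addresses:,}")
print(f"IPv6 per device (est): {per_device:.3e}")
```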
Autonomous Vehicles
Most major automotive companies are working on autonomous vehicles, including the very heavily marketed Tesla and Toyota (TM), which showed off self-driving cars at this year’s Consumer Electronics Show. Volkswagen (VLKAY) and Hyundai (HYMLF) are both working with Aurora Innovations, which was launched by a former Google engineer. Ford (F) teamed up with Lyft, as well as Domino’s (DPZ) and Postmates, to announce its self-driving car platform earlier this year.
One of the most active car firms is General Motors, which owns a self-driving car unit called ‘Cruise Automation’, acquired for $1 billion in 2016, and which starts self-driving tests this year in New York. GM also owns 9% of Lyft, the ride-sharing company, and has invested in Uber; it’s hedging its bets and spreading investments to make sure GM gets to market first!
What is it?
In 1969, John McCarthy, in an essay titled “Computer-Controlled Cars”, described an “automatic chauffeur” capable of finding its way down a public road via a “television camera input that uses the same visual input available to the human driver”. He stated that users should be able to enter a destination using a keyboard, which would prompt the car to immediately drive them there. Additional commands would allow users to change destination, stop at a rest room or restaurant, slow down, or speed up in the case of an emergency.
Over 20 years later, Dean Pomerleau, a Carnegie Mellon researcher, published a thesis describing how neural networks could enable a self-driving vehicle to take in raw images of the road and output steering controls in real time. His Navlab self-driving car travelled 2,797 miles from Pittsburgh to California in a journey they called “No Hands Across America.”
Autonomous technologies cover a wide range and can be aligned to a scale of capabilities as follows:
- Level 0: No automation. The driver controls steering and speed (both acceleration and deceleration) at all times, with no assistance at all. This includes systems that only provide warnings to the driver without taking any action.
- Level 1: Limited driver assistance. This includes systems that can control steering or acceleration/deceleration under specific circumstances, but not both at the same time.
- Level 2: Driver-assist systems that control both steering and acceleration/deceleration. These systems shift some of the workload away from the human driver, but still require that person to be attentive at all times.
- Level 3: Vehicles that can drive themselves in certain situations, such as in traffic on divided highways. When in autonomous mode, human intervention is not needed. But a human driver must be ready to take over when the vehicle encounters a situation that exceeds its limits.
- Level 4: Vehicles that can drive themselves most of the time but may need a human driver to take over in certain situations.
- Level 5: Fully autonomous. Level 5 vehicles can drive themselves at all times, under all circumstances. They have no need for manual controls.
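The scale above is essentially a lookup from automation level to what is asked of the driver. A hypothetical sketch (the wording and helper names are illustrative, not from any automotive standard’s API):

```python
# Illustrative mapping of the automation levels to the driver's role
DRIVER_ROLE = {
    0: "drives at all times; system may only warn",
    1: "drives; system assists with steering OR speed, not both",
    2: "must stay attentive; system handles steering AND speed",
    3: "may disengage, but must be ready to take over on request",
    4: "rarely needed; vehicle handles most situations itself",
    5: "not needed; no manual controls required",
}

def attention_required(level: int) -> bool:
    # Levels 0-2 demand constant attention; level 3 still needs a
    # ready driver; levels 4-5 do not require ongoing attention.
    if level <= 2:
        return True
    return level == 3

assert attention_required(2) is True
assert attention_required(5) is False
```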
Autonomous cars use a combination of technologies to perceive their world, including radar, laser light (lidar), GPS, odometry, and computer vision. Advanced control systems interpret the sensory information to identify navigation paths along with obstacles and signs.
The benefits of autonomous cars include reduced infrastructure costs through more efficient use of existing road networks and parking, lower energy consumption, less need for insurance, reduced costs, expanded access to mobility for all, and a move towards ‘Transportation as a Service’.
Fog Computing
Fog or edge computing is being pushed heavily by a few of the leading IoT technology players, including Cisco, IBM, and Dell, and one of its strongest supporting cases comes from the automobile industry. According to a report from ON World, there will be 300 million connected cars on the road by 2025. These vehicles will use a range of sensors and automated systems for everything from self-driving and self-parking to infotainment and traffic and weather alerts. It wouldn’t be feasible to send the amount of data that these systems generate to the cloud.
Google developed cloud computing to save money, as open-source software reduces costs by creating virtual operating systems that businesses can build on together. Distributed computing enabled Google to build large-scale computing capacity from low-cost, commodity computer chips, saving costs.
Online gaming businesses need their games to render highly complex and immersive environments instantly, which requires high-performance graphics processing chips, and gamers want to play with and against each other across huge geographies with low latency. Nvidia (NVDA), a market leader in fast graphics chips, has doubled in value over the past year, finding new markets in cars, gaming, clouds and even blockchain. Nvidia is now valued at more than 15 times its sales because of this cloud upgrade opportunity.
Another use case for fog computing is IoT applications such as the next-generation smarter transportation network, known as V2V in the US and the Car-To-Car Consortium in Europe. Dubbed the ‘Internet of Vehicles,’ each vehicle and traffic enforcement device is an IoT device that produces a stream of data and connects to other vehicles as well as to traffic signals and the streets themselves, with the promise of safer transportation through better collision avoidance and traffic that flows more smoothly.
Fog computing has also been applied in manufacturing, in the IIoT (Industrial Internet of Things). This allows connected manufacturing devices with sensors and cameras to gather and process data locally rather than send all of it to the cloud. In one real-world wireless test of a distributed fog computing model, processing data locally allowed a 98% reduction in the data packets transmitted while maintaining 97% data accuracy. In addition, the energy savings make for effective energy consumption, a crucial feature for battery-powered devices.
What is it?
The term fog computing is associated closely with Cisco, who have registered the name ‘Cisco Fog Computing’; the term ‘fog’ refers to clouds close to the ground. In 2015, the OpenFog Consortium was created with founding members ARM, Cisco, Dell, Intel, Microsoft and Princeton University, and additional contributing members including GE, Hitachi and Foxconn.
Edge and Fog computing are very similar. Edge computing usually occurs directly on the devices to which IoT sensors are attached or on a gateway device. Fog computing moves the edge computing activities to processors that are connected to the LAN or LAN hardware itself and hence physically more distant from the sensors.
Fog computing uses the concept of a network fabric that stretches from the outer edges, where data is created, to where it will eventually be stored, whether that’s in the cloud or in a customer’s data center. Fog is another layer of a distributed network environment: public infrastructure-as-a-service (IaaS) cloud vendors are the high-level, global endpoints for data, and the edge of the network is where data from IoT devices is generated.
Fog computing uses edge devices to carry out a substantial amount of computation, storage and communication locally before routing data over the internet backbone, and it handles input and output from the physical world, a process known as transduction. Edge nodes directly perform physical input and output, often to achieve sensor input, display output, or full closed-loop process control. Fog computing can also use smaller edge clouds, referred to as ‘cloudlets’, located at the edge or nearer to the edge than centralized clouds.
By processing data close to where it is created, networks have lower latency and, with less data to upload, are accordingly more efficient. Fog computing can also process data where no bandwidth is available. It acts as an intermediary between IoT devices and the cloud computing infrastructure they connect to, analyzing and processing data close to the source and filtering what gets uploaded to the cloud.
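The filter-at-the-source idea above can be sketched in a few lines: an edge node reduces a raw stream of readings to a compact summary plus any anomalies, and only that small payload is uploaded (the threshold and field names here are invented for illustration):

```python
def edge_filter(readings, threshold=80.0):
    """Process raw sensor readings locally; forward only a summary
    plus any anomalous values, instead of the full stream."""
    anomalies = [r for r in readings if r > threshold]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,
    }
    return summary  # this small dict is what gets sent to the cloud

raw = [71.2, 70.8, 95.3, 72.1, 70.9, 71.5]  # e.g. temperature samples
payload = edge_filter(raw)
print(payload["count"], len(payload["anomalies"]))  # 6 readings -> 1 anomaly
```

Six raw readings collapse into one small summary, which is the 98%-style reduction the manufacturing example describes.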
Fog requires more than just upgrading data centers, notes Marty Puranik, CEO of Atlantic.Net, a cloud hosting company in Orlando, Florida. “By placing data collection points closer to end users,” say within a building, “corporations can enjoy cloud processing speeds on even the most complex data sets,” he says. Think of clouds themselves as mainframes, fast PCs as clients, and these new Nvidia devices as servers sitting between them. Fog systems thus link cloud systems to the world you’re living in as Google once linked it through questions typed with your fingers.
Artificial Intelligence
Artificial intelligence is already pervasive in our lives, and some of the better-known examples include the following:
- Siri: Apple’s voice-activated assistant that we interact with on a daily basis. Siri will find information, give directions, add events to our calendars, send messages and so on. Siri is a pseudo-intelligent digital personal assistant that uses machine-learning technology to better predict and understand our natural-language questions and requests.
- Alexa: Amazon’s assistant can browse the web for information, shop, schedule appointments, set alarms and many other things. It can also control a smart home.
- Tesla: Elon Musk has had an incredibly disruptive and healthy effect on the car industry and made battery powered cars both sexy and smart with predictive and self-driving features.
- Amazon.com: Behind the web façade is a hugely powerful AI engine that uses predictive and learning algorithms to target and present what we’re interested in buying based on our online behavior. Future plans include automated purchasing and shipping based on historical behavior.
- Netflix: AI designed to learn what kind of films might appeal to you based on your reactions, buy/rent decisions and the kind of titles that you browse.
- Nest: The learning thermostat acquired by Google in January of 2014 for $3.2 billion. The Nest learning thermostat can be controlled by Alexa and uses behavioral algorithms to predictively learn from your heating and cooling needs, with a view to anticipating and adjusting temperature based on your own profile.
What is it?
Artificial intelligence (AI) is an area of computer science focused on the creation of intelligent machines that work and react like humans. Some of the capabilities that computers with artificial intelligence are designed for include:
- Speech recognition
- Problem solving
A range of core capabilities underpin AI, including ‘Machine Learning’. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, while learning with supervision involves classification and numerical regression. Classification determines the category an object belongs to, and regression takes a set of numerical input and output examples and discovers the functions that generate suitable outputs from the respective inputs. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
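The regression capability described above, discovering a function that maps numerical inputs to outputs, can be illustrated with a plain least-squares fit of a straight line:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training examples: the model "discovers" y = 2x + 1 from the data
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(round(slope, 6), round(intercept, 6))  # 2.0 1.0
```

Real machine-learning systems fit far richer functions than a line, but the principle is the same: parameters are chosen so the function reproduces the example outputs.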
‘Machine Perception’ is the capability to use sensory inputs to deduce different aspects of the world, while computer vision is the power to analyze visual inputs, with sub-problems such as facial, object and gesture recognition. Robotics enables AI to physically interact with the real world through tasks such as object manipulation and navigation, along with the sub-problems of localization, motion planning and mapping.
The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal, although the term can be applied to any machine that exhibits traits associated with a human mind, such as learning and solving problems.
When we think of artificial intelligence, we still tend to think of The Terminator or Ex Machina and our fear that AI and robots could someday wipe out humanity. Even tech icons like Bill Gates and Elon Musk occasionally peddle irrational tales and dire warnings.
But the biggest problem with AI today isn’t that it’s too smart. In fact, it’s too rudimentary to be much use in many cases. One of the dirty little secrets about AI and machine learning is that they are so bad at dealing with ambiguity, gray areas, and understanding the context of data that many companies are having to hire armies of human beings to do the data sorting, data cleansing, and data preparation that’s needed to feed the algorithms pristine data so that they can do their work.
From a business standpoint, AI is about automation, prescriptive analytics, business process automation, and driving radical efficiency. The market for AI solutions is expected to reach US $47 billion by 2020.
Automation is predicted to eliminate 6% of the jobs in the United States over the next five years including those of highly skilled, knowledge-based employees. A study by the University of Oxford predicts that accountants have a 95% chance of becoming obsolete and Deloitte estimates 39% of jobs in the legal sector will be automated in the near future.
Virtual Reality
Virtual reality has held so much promise for so long now, but it is still not mainstream technology! Part of the challenge is in the physical elements that comprise the experience; in a nutshell, not many people want to walk around with the huge head apparatus needed to deliver it. Applications that are heavily associated with it include:
- Gaming: Probably the most obvious use! It offers an intense, immersive and impressive experience that elevates gaming to a whole new level.
- Watching films: Fully immersive VR movies where you can explore and look at scenes from different angles and pay attention to where you choose.
- ‘Visiting’ places: For example, tours of museums for people unable to get to the building, and estate agents providing potential buyers with a ‘walk-round’ of a virtual model of a property.
- Surgery: It’s safer to train surgeons to perfect techniques on things other than real humans, using fully interactive and accurately modelled specimens.
What is it?
Virtual reality is a term used to describe a three-dimensional, computer generated environment which can be explored and interacted with by a person. That person becomes part of this virtual world, or is immersed within this environment, and whilst there, is able to manipulate objects or perform a series of actions.
Virtual reality is implemented using computer technology and there are a range of systems that can be used for this purpose, such as headsets, omni-directional treadmills and special gloves. These are used to actually stimulate our senses together to create the illusion of reality.
This is really challenging, as our senses and brains are finely developed to synchronize and mediate experience, and any minor anomalies are very easily picked up. We use terms such as immersiveness and realism to describe the qualities that divide convincing, or enjoyable, virtual reality experiences from jarring or unpleasant ones.
Virtual reality technology has both a computational and a physical side, as it needs to integrate closely with our own physiology. The human visual field is not the same as a video frame: we have up to 180 degrees of vision, including peripheral. When digital screens fail to align with that field of vision, refresh quickly enough, or keep the view consistent with the vestibular system in your ears, the conflict can cause motion sickness. When an implementation of virtual reality gets the combination of hardware, software and sensory synchronicity right, it achieves what is termed ‘a sense of presence’, where the subject really feels fully immersed in and part of the virtual environment.
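The synchronization constraint above translates into a hard timing budget: at typical VR refresh rates the whole tracking-render-display pipeline must complete within each frame interval, or the mismatch with the vestibular system shows up as motion sickness. A quick calculation (the rates shown are common headset figures):

```python
def frame_budget_ms(refresh_hz: float) -> float:
    # Time available to produce one frame at a given refresh rate
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

At 90 Hz the budget is roughly 11 ms per frame, which is why VR hardware leans so heavily on fast graphics chips.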
One of the biggest obstacles to virtual reality achieving true scale is the creation of enough content to attract a wide swath of consumers. As the industry has learned, onboarding hard-core gamers will not be enough to sustain a long-term effort.
Augmented Reality
Augmented reality has tangible applications and is probably stronger in this respect than virtual reality. For example, Gatwick airport’s passenger app uses more than 2,000 beacons throughout its two terminals to let passengers navigate through the airport with AR maps on their mobile phones. Ikea’s Place app, built using Apple’s ARKit technology, allows you to scan your room and design the space by placing Ikea objects in the digital image of your room to create a new environment with the new products. The Dulux Visualizer helps you try out a shade of paint for your room before you buy: you use your smartphone camera to scan any room and can then virtually paint it with any color of the rainbow. AccuVein is a handheld device that can scan the vein network of a patient, leading to a 45% reduction in escalations. Surgeons can plan procedures, and models can be made of tumors, with diagnostic tools to model disease conditions.
What is it?
Augmented reality (AR) is not new; many people remember Arnold Schwarzenegger’s T-800 Terminator in James Cameron’s 1984 blockbuster, where the Terminator’s vision was overlaid with streaming information about subjects, objects and objectives. Since then it has struggled, with experiments like Google Glass viewed as failures to capture enough attention. There was a brief resurgence in 2016 with the launch of Pokémon Go, which overlaid the virtual game world on the real world, although that too faded in time!
Unlike virtual reality, which requires you to inhabit an entirely virtual environment, augmented reality takes your existing natural environment and overlays virtual information on top of it. As both virtual and real worlds coexist users of augmented reality can experience a reality+ where virtual information is used as a tool to increase the richness of the view.
AR can be displayed on various devices including screens, glasses, handheld devices, mobile phones and head-mounted displays. It also involves technologies such as SLAM (simultaneous localization and mapping) and depth tracking (sensor data calculating the distance to objects), along with the following components:
- Cameras and sensors: Collect data about the user’s interactions and send it for processing. Cameras on devices scan the surroundings, and with this information a device locates physical objects and generates 3D models.
- Processing: AR devices essentially act like little computers, so they require a CPU, a GPU, flash memory, RAM, Bluetooth/WiFi, GPS, etc., to be able to measure speed, angle, direction, orientation in space, and so on.
- Projection: This refers to a miniature projector on AR headsets, which takes data from the sensors and projects digital content (the result of processing) onto a surface to view. In fact, projection in AR is not yet mature enough for use in commercial products or services.
- Reflection: Some AR devices have mirrors to assist human eyes to view virtual images. The goal of these reflection paths is to perform a proper image alignment.
There are a number of types of AR:
- Marker-based: Uses a special visual object (the marker) and a camera to scan it. The AR device calculates the position and orientation of the marker to position the content.
- Location-based: Uses GPS, a compass, a gyroscope and an accelerometer to provide data based on the user’s location. This data then determines what AR content you find or get in a certain area.
- Projection-based: Projects synthetic light onto physical surfaces to allow interaction. These are the holograms we see in films like Star Wars. The system detects user interaction with the projection by its alterations.
- Superimposition: Replaces the original view with an augmented view. Examples include the IKEA Catalog app that allows users to place virtual items of their furniture catalog in their rooms.
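Location-based AR, listed above, ultimately reduces to a proximity test: compare the user’s GPS fix against each point of interest and show content only within some radius. A sketch using the haversine great-circle distance (the points of interest and radius are made up for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_content(user, pois, radius_m=50):
    # Show AR overlays only for points of interest within the radius
    return [name for name, lat, lon in pois
            if haversine_m(user[0], user[1], lat, lon) <= radius_m]

pois = [("Gate 12", 51.1537, -0.1821), ("Cafe", 51.1600, -0.1900)]
print(nearby_content((51.1537, -0.1820), pois))  # only "Gate 12" is in range
```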
Devices that support augmented reality include:
- Smartphones and tablets: From gaming and entertainment to business analytics, sports and social networking.
- Special devices: Such as a Head-Up Display (HUD) that sends information to a transparent display directly in a pilot’s visor view.
- Smart lenses: Manufacturers including Samsung and Sony are working on AR lenses.
- Virtual retinal displays (VRD): Creating images by projecting laser light into the human eye.
Chatbots
Chatbots have certainly arrived; examples include a bot in China called Xiaoice, built by Microsoft, that now has over 20 million people talking to it!
One key driver for uptake is the shift from social media platforms to messaging apps, and bots will be how their users access services. For future developers: if you want to build a business online you need to build it where the people are, and that place is now inside messenger apps!
What is it?
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).
A chatbot uses a computer program to mimic human conversation in its natural format, whether text or spoken language, using artificial intelligence techniques such as Natural Language Processing (NLP), image and video processing, and audio analysis. The most interesting feature of bots is that they learn from past interactions and become more intelligent over time. Chatbots work in two ways: rule-based and smart machine-based. Rule-based chatbots provide predefined responses from a database, based on the keywords used in the query. Smart machine-based chatbots inherit capabilities from Artificial Intelligence and Cognitive Computing and adapt their behavior based on customer interactions.
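A rule-based bot of the kind just described can be as simple as a keyword-to-response table; the rules below are invented purely for illustration:

```python
# Hypothetical rule table: keyword -> canned response
RULES = {
    "hours": "We're open 9am-6pm, Monday to Saturday.",
    "refund": "Refunds are processed within 5 working days.",
    "hello": "Hi! How can I help you today?",
}
FALLBACK = "Sorry, I didn't understand. Could you rephrase that?"

def reply(message: str) -> str:
    # Match the first known keyword found in the user's message
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK  # smart (AI-based) bots would do far more here

print(reply("Hello there"))            # greeting rule fires
print(reply("What are your hours?"))   # hours rule fires
```

A smart machine-based bot replaces the keyword lookup with learned language understanding, but the surrounding loop of message in, response out is the same.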
Enterprise applications of messaging bots are set to have a tangible impact on the software space, as more companies invest in developing their own consumer-facing bots. Chatbots, at the most simplistic level, are front-end interfaces for companies to communicate with their customers. More advanced bots leverage artificial intelligence to provide enriching and interactive user experiences.