False Dichotomy

A previous article suggested that the two ways to deal with potentially disruptive technologies are to either invest in all potentially disruptive technologies or to ignore all potentially disruptive technologies.

One of the most powerful ways to go wrong is the False Dichotomy, which is also known as false choice, black and white thinking, or either/or thinking. Typically this involves a situation which is presented as a binary choice of either A or B, with the implicit or explicit assumption that these are the only possible choices – there are no alternatives C, D, or E.

In real world situations this is almost never the case. There are always options, or at least variations on the two choices presented. In many cases the alternatives presented are the extreme positions, ignoring the many alternatives between them.

Further, in the case of disruptive innovation the best alternative may be neither A nor B but “kumquat” – something completely unexpected and entirely outside the range of alternatives being considered! A popular saying is “On a scale of 1 to 10, what is your favorite color in the alphabet?”. While these examples appear nonsensical, they illustrate the need to consider alternatives that may not be obvious – an approach often called thinking outside the box.

Two points are worth making: first, there is almost never The Right Answer – that is, a single correct answer that solves all problems and where any other answer is Wrong. Instead, there are a range of alternatives that can be made to work with various levels of effort and trade-offs. Part of product planning is to explore these alternatives and to determine the benefits, cost, and risk associated with each.

Second, interesting problems are invariably multi-variate. Instead of a single parameter that can be optimized, several interacting parameters must be considered. Any real world situation is going to involve a series of trade-offs, typically between capabilities, cost, investment, resources, integration, and time. Other factors include side effects and consequences: for example, one material being considered for a product may have all desired physical properties but be toxic.

Also important is understanding whether a constraint is an absolute constraint like the speed of light¹ or is flexible. In the example above, a toxic material might be used if it is carefully packaged.

Looking for The Right Answer can lead to ignoring acceptable solutions and approaches that can be made to work. A better approach is to consider multiple potential solutions, determine the strengths and weaknesses of each – including what can be done to address these weaknesses – and choose the best overall solution. Note that the best overall solution will often include elements from multiple approaches – even elements from both halves of the false dichotomy!

For product development the challenge is to understand customer needs well enough to provide a product that meets their needs at a price they are willing to pay. Note that the customer has the final word on what their needs are – if a feature is not wanted or used by a customer, that feature does not meet their needs. The ideal is a product that meets current customer needs and can be extended to meet future needs.

While product definition is often done informally, there are structured approaches that can be used. A powerful technique is Design Thinking.

Design Thinking uses a five step process of: defining (or redefining) the problem, needfinding and benchmarking, ideating, building, and testing. Design Thinking is a team based approach, best done with a multidisciplinary team bringing different knowledge, expertise, and viewpoints to the project.

Much of the power of Design Thinking comes from applying a structured methodology to complex problems – much more than a blog post is needed to really understand it, much less apply it. Fortunately there are many resources available, including books and courses. Both edX and Coursera offer courses on the subject, with edX even offering a five course “micromasters” program.

¹ This is something of a trick example. The speed of light in a vacuum can’t be exceeded. However light travels through other materials at different (slower) speeds. For example, light in a fibre optic cable is roughly 30% slower than in a vacuum. There is a lot of interesting work going on around quantum entanglement that may allow information exchange to exceed the speed of light – this would definitely be a disruptive technology! Thus this is actually an example of the importance of understanding your constraints and exploring novel options to possibly work around fixed constraints!

Posted in Product Development | Leave a comment

Unknowable Markets

Clayton Christensen is very clear about understanding the markets for disruptive technologies: “Markets that do not exist cannot be analyzed: Suppliers and customers must discover them together. Not only are the market applications for disruptive technologies unknown at the time of their development, they are unknowable.”


In the early stages of a disruptive technology you don’t know what it is, how it works, what is required to develop it, who will use it, what they will use it for, or how they will use it. Based on this you have to build a business plan, establish a return on investment that meets corporate thresholds, prepare a development plan and budget, get approval, obtain and assign resources, deliver on schedule, and meet sales forecasts. And, of course, the new technology is inferior to existing more mature technologies for most use cases.

Right. Easy. No problem!

The only things you know at the early stages are that the initial markets will be small and that your early beliefs and assumptions are almost certainly wrong. Just to make things even better, there is an excellent chance that any truly new technology won’t work out. When it does work it is likely to take significant time to mature – more time than most companies are willing to accept for their investments. Until the new technology matures it will be inferior to existing technologies for most applications. Welcome to the wonderful world of pioneering new product development!

Based on this, no reasonable person would want to be involved in developing a disruptive technology.

Fortunately we have unreasonable people! People with vision, passion, and the skills needed to go after things that haven’t been done before. People with the determination to continue even after setbacks and failure. People that believe “impossible” is just a word in the dictionary between “imposition” and “impost”.

The next challenge is that well run companies have systems designed to prevent investment in disruptive technologies. Well, they aren’t specifically designed to do that, but it is one of the side effects of planning systems.

There are always more potential projects than available resources. Well run companies have well developed systems to allocate and assign resources, most notably money and people, to the projects that have the highest payback. The best companies have systems that give priority to projects that take advantage of the core resources and competencies of the company and align with the largest and most profitable market segments.

These companies know their markets, their customers, their resources – and themselves. They have a laser focus on excellence in execution and predictability. A learning company will continuously improve their processes and products. They are well aware of the adage “it is easier to keep an existing customer than to capture a new customer”. They dedicate themselves to keeping their customers, making customer service a top priority and seeking more ways to deliver value to their customers.

“But” proclaim the believers in an unproven and immature technology, “this new technology will revolutionize our industry and put us out of business if our competitors have it and we don’t!” As shown in the Gartner Hype Cycle they are joined by analysts and the press in proclaiming how everything is changing and you have to fully commit now!

There are a couple of sayings to keep in mind: the noted economist Paul Samuelson observed “the stock market has predicted nine of the last five recessions.” Also “even a blind squirrel finds a nut once in a while.”

We know that disruptive innovation happens and that it can greatly impact even – or perhaps especially? – the best run companies. What can we do about it?

  1. Invest in every new potentially disruptive technology. This will divert critical resources from sustaining innovation in the short term and cause the company to become uncompetitive.
  2. Ignore disruptive technologies. This approach works – until it doesn’t.

Some industries experience disruptive changes every few years. Other industries go decades without disruptive changes. The risk is that disruptive changes build up slowly and then hit with such speed and impact that it is too late to respond when they do happen.

Fortunately this is a false dichotomy – there are choices between “everything” and “nothing”. The next article will begin to explore these alternatives.

Posted in Product Development | 1 Comment


The previous article introduced the concept of product lifecycles. Examining the lifecycle model leads to the conclusion that the most profitable approach is to focus on the majority markets and largely ignore the innovators. In fact this is valid – within limits!

Clayton Christensen addresses this in The Innovator’s Dilemma where he introduces two types of innovation: sustaining innovation, which is innovation directed at solving an existing problem, and disruptive innovation, which involves using new technology to initially create new markets and then to ultimately address mainstream markets.

The concept can be summarized as sustaining innovation is a problem looking for a solution, while disruptive innovation is a solution looking for a problem. For sustaining innovation you understand the problem that needs to be solved and the challenge is to solve it. You understand the market, the customers and their needs, alternative solutions, and competitors. You can perform valid market research, make financial projections, and apply existing resources, processes, and skills.

Christensen discovered that existing companies do very well with sustaining innovation. They can tackle extraordinarily complex and difficult technologies and apply them to meeting their customers’ needs. They can make large investments and overcome seemingly impossible challenges. As an old saying goes, understanding the problem is 80% of the solution.

On the other hand, Christensen also discovered that successful companies do a poor job of dealing with disruptive technologies. They tend to either ignore a new technology until a competitor has established a strong position or they fail to successfully develop and market products built on the new technologies.

What is going on here? Is the problem with sustaining innovation? Not at all – successful companies are built on continuous improvement. Companies that don’t continuously improve their products and processes will fall behind the companies that do. Unswerving dedication to customers is a hallmark of a great company. Attempts to challenge a successful company in an established market are expensive and usually unsuccessful.

Making sense of this apparent contradiction needs several more concepts.

Customer Needs

There are several components to the model that Christensen proposes. A core concept is customer needs – specifically, how well a technology meets customer needs.

Capabilities vs. customer requirements of a technology

This chart is a different look at the innovators/majority-market model Moore used in Crossing the Chasm. It shows a typical technology development curve where a new technology starts out being useful but not meeting all customer needs. The technology improves to the point where it meets and then ultimately exceeds customer needs.

Note that a “good enough” product can still be improved. It doesn’t meet all needs of all customers. Customer needs do continue to grow over time. The interesting case occurs when technology/performance improvement is growing faster than customer demands. When this occurs the customer focus moves from technology and performance to other factors such as convenience, reliability – and cost! Customers are unwilling to pay a premium for product capabilities that exceed their needs.

Technology Evolution

Christensen proposed that the evolution of technology shown in the customer needs chart follows an “S” curve. In the early stages investments in a new technology are largely speculative. This is fundamental research – experimentation to discover how to build the new technology and discovery of what it can do.

If the technology is viable an inflexion point is reached where incremental investments in the technology or product produce significant increases in performance or capabilities. This is typically where large market growth occurs.

As the technology matures each increment of investment produces smaller returns – you reach a point of diminishing returns for investments.
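The shape of this curve can be sketched with a simple logistic function. This is purely an illustrative model with made-up units, not Christensen’s data: marginal returns are small during early speculative research, large near the inflection point, and diminishing at maturity.

```python
import math

def s_curve(investment, midpoint=5.0, steepness=1.0, ceiling=100.0):
    """Capability delivered as a function of cumulative investment
    (arbitrary units): an illustrative logistic "S" curve."""
    return ceiling / (1.0 + math.exp(-steepness * (investment - midpoint)))

def marginal_return(investment, delta=1.0):
    """Capability gained per additional unit of investment."""
    return s_curve(investment + delta) - s_curve(investment)

early = marginal_return(1.0)       # speculative research: small gains
inflection = marginal_return(4.5)  # inflection point: large gains
mature = marginal_return(9.0)      # maturity: diminishing returns
assert early < inflection and mature < inflection
```

The same increment of investment buys very different amounts of progress depending on where the technology sits on the curve, which is why timing matters so much.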

With successful products you have typically been moving up-market as the technology evolves – delivering more support to more demanding customers in a broader market. This requires – and delivers! – larger gross margins for the products and a larger organization with more overhead to meet the demands of large customers.

Improvements in a technology vs. investments

Following this model we have seen a scrappy startup with an exciting new technology growing into a successful and profitable mainstream company – the classic success story!

This leaves us with unanswered questions: First, how does the scrappy startup grow into a profitable company rather than becoming another failure? Second, how can an existing successful company deal with disruptive innovation?

Posted in Product Development | 1 Comment

Building Successful Products

Building a new product is hard. Building a successful new product is even harder. And building a profitable new product is the greatest challenge! To make things even more interesting, the fundamental customer requirements for a product change as the product and market mature. The very things that are required for success in an early stage product will hinder or even prevent success later on.

Markets, technologies and products go through a series of predictable stages. Understanding this evolution – and understanding what to do at each stage! – is vital for navigating the shoals of building a successful and profitable product.

Technology Adoption Lifecycle

In 1991 Geoffrey Moore revolutionized technology marketing with his seminal book Crossing the Chasm. This book changed the perception of market growth from a smooth curve to a curve with a large hole. The idea was that there were major differences between early adopters of a technology and mainstream users – and that this difference is so large that there is a chasm between these groups.

Key to the Chasm Model is the idea that early adopters have completely different requirements and expectations than mainstream users. These differences are so large that completely different approaches are required. The things that produce success when dealing with innovators and early adopters will fail with mainstream users. The things that mainstream users require are of little interest to innovators.

Just to make things interesting, new markets start with innovators – without innovators you have no starting point for developing and proving new technologies. Mainstream users, on the other hand, will not adopt new, unproven products and technologies.

Innovators are seeking competitive advantage. They want new capabilities that let them leapfrog several steps ahead of their competitors. They are willing to take risk. They want something that no-one else has, and they want it now. Their metrics are capabilities, features, and time to market. They are willing to accept the chance of failure to gain the chance of success. Innovators are willing to do a lot of work themselves and to accept point solutions.

The majority of the market, on the other hand, is seeking mature products. They expect things to work out of the box. They are looking for proven solutions that they can integrate into their existing environment. They want support, upgrades, and even documentation. They are not willing to accept significant risk.

Part of Moore’s argument is that a product or technology that is successful in the innovator and early adopter markets can create enough momentum to become a de facto standard in the mainstream markets – that success in the early markets has the potential to lock competitors out of the lucrative mainstream markets.

If you are prepared for the Chasm, your goal is to cross the Chasm, establish a beachhead in the majority market, and then build out from this beachhead to conquer the profitable majority markets.

Another possible outcome is for a competitor who is already established in the majority markets to keep a close eye on new entrants into the market. As long as they are in the Innovator and Early Adopter phases they can be monitored with no action taken. When someone successfully crosses the chasm and establishes a beachhead they are vulnerable – at this point a fast follower can swoop in, use their greater resources to address the needs of the majority markets, and take over just as the market becomes profitable.

The challenge for a fast follower is judging where the new entrants are. Move too early, while the market still consists of Early Adopters, and you waste resources on people who don’t care about your strengths. Go after a market entrant who hasn’t established a proven beachhead and started to move beyond it, and you may waste resources on an unproven market. Move too late, after a new entrant has established their beachhead and moved into the majority markets, and you can find yourself facing an entrenched competitor with adequate resources and newer technology and products.

Of course, there is a lot more detail than this – see the book!

Gartner Hype Cycle

Exciting new technologies are exactly that – exciting! This excitement and inflated expectations for a new technology are often taken as proof of a large market just waiting for new products.

The Gartner Hype Cycle shows what actually happens. The Hype Cycle looks at customer expectations and perception as a technology is introduced and then matures over time.

The Gartner Hype Cycle is critical for understanding the difference between excitement and revenue. As a general rule, the more a new technology is covered in media, conferences, and even airline magazines, the lower the current real market opportunity.

To fully understand and appreciate the Hype Cycle, see Mastering the Hype Cycle: How to Choose the Right Innovation at the Right Time.

Bringing the Pieces Together

Now let’s bring these two models together:

This illustration shows that the greatest excitement around a new technology – the greatest hype – occurs before the profitable market emerges. The innovators represent about 2.5% of the total market and the early adopters another 13.5% – meaning that about 16% of the total market exists before the chasm.

This chart explains why innovation and excitement typically don’t directly lead to large revenues – there simply aren’t enough of the innovators and early adopters, and the majority markets will not accept early stage technology and immature products.


Posted in Product Development | 5 Comments

Where Did That Software Come From?

I have an article in the Oct. 9 issue of Military Embedded Systems magazine on software provenance titled Where Did That Software Come From?

Where did the software on your embedded system come from? Can you prove it? Can you safely update systems in the field? Cryptography provides the tools for verifying the integrity and provenance of software and data. There is a process by which users can verify the source of software, whether it was tampered with in transit, and whether it was modified after installation.

The article explores how cryptography, especially hashing and code signing, can be used to establish the source and integrity of software. It examines how source code control systems and automated build systems are a key part of the software provenance story. (Provenance means “a record of ownership of a work of art or an antique, used as a guide to authenticity or quality.” It is increasingly being applied to software.)
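The hashing half of this is straightforward to illustrate: a publisher releases a SHA-256 digest alongside the software, and anyone can check that the bytes they received match the bytes that were released. A minimal sketch (real distribution chains layer code signing on top of the bare hash):

```python
import hashlib

def sha256_digest(path):
    """Compute the SHA-256 digest of a file, reading in chunks so
    large artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_digest(path, published_digest):
    """Compare against a digest published by the software's source;
    a mismatch means the file was corrupted or tampered with."""
    return sha256_digest(path) == published_digest
```

A hash alone proves integrity, not origin; proving who published the file is where signatures come in.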

As an interesting side note, the article describes how the git version control system is very similar to a blockchain.
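The similarity comes from hash chaining: each git commit id covers its parent’s id, so every id depends on the entire preceding history. A minimal sketch of the idea (git actually hashes structured objects with SHA-1 by default; SHA-256 over plain strings is used here purely for illustration):

```python
import hashlib

def commit_id(parent_id, content):
    """A 'commit' id hashes the content together with the parent's id,
    so every id depends on all history before it. This chaining is
    what makes both git history and a blockchain tamper-evident."""
    h = hashlib.sha256()
    h.update(parent_id.encode())
    h.update(content.encode())
    return h.hexdigest()

c1 = commit_id("", "initial import")
c2 = commit_id(c1, "fix overflow bug")
c3 = commit_id(c2, "add unit tests")

# Rewriting any earlier commit changes every id that follows it.
tampered = commit_id(c1, "fix overflow bug (backdoored)")
assert commit_id(tampered, "add unit tests") != c3
```

This is why a known-good commit id near the tip of a repository vouches for the integrity of everything behind it.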

Posted in Security | Leave a comment

IoT Security for Developers [Survive IoT Part 5]

Previous articles focused on how to securely design and configure a system based on existing hardware, software, IoT Devices, and networks. If you are developing IoT devices, software, and systems, there is a lot more you can do to develop secure systems.

The first thing is to manage and secure communications with IoT Devices. Your software needs to be able to discover, configure, manage and communicate with IoT devices. By considering security implications when designing and implementing these functions you can make the system much more robust. The basic guideline is don’t trust any device. Have checks to verify that a device is what it claims to be, to verify device integrity, and to validate communications with the devices.

Have a special process for discovering and registering devices and restrict access to it. Do not automatically detect and register any device that pops up on the network! Have a mechanism for pairing devices with the gateway, such as a special pairing mode that must be invoked on both the device and the gateway to pair or a requirement to manually enter a device serial number or address into the gateway as part of the registration process. For industrial applications adding devices is a deliberate process – this is not a good operation to fully automate!
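The deliberate registration step described above can be pictured with a small sketch. This is a hypothetical flow with made-up serial numbers, not any specific product’s API: the gateway pairs only with devices whose serials an operator entered ahead of time, never with whatever appears on the network.

```python
# Serials entered manually by the operator before pairing is allowed.
PRE_APPROVED_SERIALS = {"CAM-00417", "CAM-00522"}  # hypothetical ids

registered = {}

def try_register(serial, address):
    """Refuse any device that simply pops up on the network; pair
    only devices whose serial was pre-approved by an operator."""
    if serial not in PRE_APPROVED_SERIALS:
        return False
    registered[serial] = address
    return True

assert try_register("CAM-00417", "10.0.1.17")       # approved device
assert not try_register("CAM-99999", "10.0.1.66")   # unknown: refused
```

The point of the design is that adding a device requires a human decision on both sides, which is exactly what you want for industrial deployments.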

A solid approach to gateway and device identity is to have a certificate provisioned onto the device at the factory, by the system integrator, or at a central facility. It is even better if this certificate is backed by a HW root of trust that can’t be copied or spoofed.

Communications between the gateway and the device should be designed. Instead of a general network connection, which can be used for many purposes, consider using a specialized interface. Messaging interfaces are ideal for many IoT applications. Two of the most popular messaging interfaces are MQTT (Message Queuing Telemetry Transport) and CoAP (Constrained Application Protocol). In addition to their many other advantages, these messaging interfaces only carry IoT data, greatly reducing their capability to be used as an attack vector.

Message based interfaces are also a good approach for connecting the IoT Gateway to backend systems. An enterprise message bus like AMQP is a powerful tool for handling asynchronous inputs from thousands of gateways, routing them, and feeding the data into backend systems. A messaging system makes the total system more reliable, more robust, and more efficient – and makes it much easier to implement large scale systems! Messaging interfaces are ideal for handling exceptions – they allow you to simply send the exception as a regular message and have it properly processed and routed by business logic on the backend.

Messaging systems are also ideal for handling unreliable networks and heavy system loads. A messaging system will queue up messages until the network is available. If a sudden burst of activity causes the network and backend systems to be overloaded the messaging system will automatically queue up the messages and then release them for processing as resources become available. Messaging systems allow you to ensure reliable message delivery, which is critical for many applications. Best of all, messaging systems are easy for a programmer to use and do the hard work of building a robust communications capability for you.
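The buffering behavior described above can be sketched with an in-process queue standing in for the message broker. This is a toy model: a real deployment would use a broker with persistent queues, but the shape is the same, with a burst of messages absorbed by the queue and drained as the backend catches up.

```python
import queue
import threading
import time

messages = queue.Queue()
processed = []

def backend_worker():
    """Consumer standing in for the backend system."""
    while True:
        msg = messages.get()
        if msg is None:            # sentinel: shut down
            break
        time.sleep(0.001)          # simulate a slow backend
        processed.append(msg)
        messages.task_done()

worker = threading.Thread(target=backend_worker)
worker.start()

# The gateway emits a sudden burst of 100 readings.
for i in range(100):
    messages.put({"device": "sensor-7", "seq": i})

messages.join()                    # block until the backlog is drained
messages.put(None)                 # tell the worker to stop
worker.join()
assert len(processed) == 100       # nothing was dropped
```

The producer never blocks on the slow consumer, and no messages are lost, which is the reliability property the paragraph above describes.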

No matter what type of interface you are using it is critical to sanitize your inputs. Never just pass through information from a device – instead, check it to make sure that is properly formatted, that it makes sense, that it does not contain a malicious payload, and that the data has not been corrupted. The overall integrity of an IoT system is greatly enhanced by ensuring the quality of the data it is operating on. Perhaps the best example of this is Little Bobby Tables from XKCD (XKCD.com):

Importance of sanitizing your input.

On a more serious level, poor input sanitization is responsible for many security issues. Programmers should assume that users can’t be trusted and all interactions are a potential attack.
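The standard defense against Bobby Tables is a parameterized query, which keeps user input strictly as data rather than letting it become part of the query. A small sketch using Python’s built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# A hostile "name" in the style of Little Bobby Tables.
name = "Robert'); DROP TABLE students;--"

# UNSAFE (don't do this): string formatting lets the input rewrite
# the query itself:
#   conn.executescript("INSERT INTO students VALUES ('%s')" % name)

# SAFE: a parameterized query treats the input purely as data.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

# The table survives and the hostile string is stored verbatim.
rows = conn.execute("SELECT name FROM students").fetchall()
assert rows == [(name,)]
```

The same principle applies to any interface that interprets input, not just SQL: keep data and commands separate.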

Posted in IoT, Security, System Management | Leave a comment

Security by Isolating Insecurity [Survive IoT Part 4]

In my previous post I introduced “Goldilocks Security”, proposing three approaches to security.

Solution 1: Ignore Security

Safety in the crowd – with tens of millions of cameras out there, why would anyone pick mine? Odds are that the bad guys won’t pick yours – they will pick all of them! Automated search and penetration tools easily find millions of IP cameras. You will be lost in the crowd – the crowd of bots!

Solution 2: Secure the Cameras

For home and small business customers, a “secure the cameras” approach simply won’t work: ease of use wins out over effective security in product design, and the camera vendors’ business model (low cost, ease of use, and access over the Internet) conspires against security. What’s left?

Solution 3: Isolation

If the IP cameras can’t be safely placed on the Internet, then isolate them from the Internet.

To do this, introduce an IoT Gateway between the cameras and all other systems. This IoT Gateway would have two network interfaces: one network interface dedicated to the cameras and the second network interface used to connect to the outside world. An application running on the IoT Gateway would talk to the IP cameras and then talk to the outside world (if needed). There would be no network connection between the IP cameras and anything other than the IoT Gateway application. The IoT Gateway would also be hardened and actively managed for best security.

How is this implemented?

  • Put the IP cameras on a dedicated network. This should be a separate physical network. At a minimum it should be a VLAN (Virtual LAN). There will typically be a relatively small number of IP cameras in use, so a dedicated network switch, probably with PoE, is cost effective.
    • Use static IP addresses. If the IP cameras are assigned static IP addresses, there is no need to have an IP gateway or DNS server on the network segment. This further reduces the ability of the IP cameras to get out on the network. You lose the convenience of DHCP-assigned addresses and gain significant security.
    • You can have multiple separate networks. For example, you might have one for external cameras, one for cameras in interior public spaces, one for manufacturing space and one for labs. With this configuration, someone gaining access to the exterior network would not be able to gain access to the lab cameras.
  • Add an IoT Gateway – a computer with a network interface connected to the camera network. In the example above, the gateway would have four network interfaces – one for each camera network. The IoT Gateway would probably also be connected to the corporate network; this would require a fifth network interface. Note that you can have multiple IoT Gateways, such as one for each camera network, one for a building management system, one for other security systems, and one that connects an entire building or campus to the Internet.
  • Use a video monitoring program such as ZoneMinder or a commercial program to receive, monitor and display the video data. Such a program can monitor multiple camera feeds, analyze the video feeds for things such as motion detection, record multiple video streams, and create events and alerts. These events and alerts can do things like trigger alarms, send emails, send text messages, or trigger other business rules. Note that the video monitoring program further isolates the cameras from the Internet – the cameras talk to the video monitoring program and the video monitoring program talks to the outside world.
  • Sandbox the video monitoring program using tools like SELinux and containers. These both protect the application and protect the rest of the system from the application – even if the application is compromised, it won’t be able to attack the rest of the system.
  • Remove any unneeded services from the IoT Gateway. This is a dedicated device performing a small set of tasks. There shouldn’t be any software on the system that is not needed to perform these tasks – no development tools, no extraneous programs, no unneeded services running.
  • Run the video monitoring program with minimal privileges. This program should not require root level access.
  • Configure strong firewall settings on the IoT Gateway. Only allow required communications. For example, only allow communications with specific IP addresses or MAC addresses (the IP cameras configured into the system) over specific ports using specific protocols. You can also configure the firewall to only allow specific applications access to the network port. These settings would keep anything other than authorized cameras from accessing the gateway and keep the authorized cameras from talking to anything other than the video monitoring application. This approach also protects the cameras. Anyone attempting to attack the cameras from the Internet would need to penetrate the IoT Gateway and then change settings such as the firewall and SELinux before they could get to the cameras.
  • Use strong access controls. Multi-factor authentication is a really good idea. Of course you have a separate account for each user, and assign each user the minimum privilege they need to do their job. Most of the time you don’t need to be logged in to the system – most video monitoring applications can display on the lock screen, allowing visual monitoring of the video streams without being able to change the system. For remote gateways interactive access isn’t needed at all; they simply process sensor data and send it to a remote system.
  • Other systems should be able to verify the identity of the IoT Gateway. A common way to do this is to install a certificate on the gateway. Each gateway should have a unique certificate, which can be provided by systems like Linux IdM or MS Active Directory. Even greater security can be provided by placing the system identity into a hardware root of trust like a TPM (Trusted Platform Module), which prevents the identity from being copied, cloned, or spoofed.
  • Encrypted communications is always a good idea for security. Encryption protects the contents of the video stream from being revealed, prevents the contents of the video stream from being modified or spoofed, and verifies the integrity of the video stream – any modifications of the encrypted traffic, either deliberate or due to network error, are detected. Further, if you configure a VPN (Virtual Private Network) between the IoT Gateway and backend systems you can force all network traffic through the VPN, thus preventing network attacks against the IoT Gateway. For security systems it is good practice to encrypt all traffic, both internal and external.
  • Proactively manage the IoT Gateway. Regularly update it to get the latest security patches and bug fixes. Scan it regularly with tools like OpenSCAP to maintain secure configuration. Monitor logfiles for anomalies that might be related to security events, hardware issues, or software issues.
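The integrity-protection half of encrypted communications can be illustrated with an HMAC: a tag computed over each message with a shared key lets the receiver detect any modification in transit. A sketch with an assumed pre-shared key; real deployments would derive keys from the TLS/VPN session or from provisioned certificates rather than hard-coding one.

```python
import hashlib
import hmac

# Hypothetical pre-shared key for illustration only.
SHARED_KEY = b"per-gateway-provisioned-key"

def tag(message: bytes) -> bytes:
    """Authentication tag covering both the key and the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(tag(message), received_tag)

frame = b'{"camera": 3, "event": "motion"}'
t = tag(frame)
assert verify(frame, t)                                  # intact
assert not verify(b'{"camera": 3, "event": "none"}', t)  # modified
```

An attacker who flips even one bit of the frame, deliberately or via network error, produces a tag mismatch, which is the detection property described in the encryption bullet above.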

You can see how a properly configured IoT Gateway can allow you to use insecure IoT devices as part of a secure system. This approach isn’t perfect – the cameras should also be managed like the gateway – but it is a viable approach to building a reasonably secure and robust system out of insecure devices.

One issue is that the cameras are not protected from local attack. If WiFi is used the attacker only needs to be nearby. If Ethernet is used an attacker can add another device to the network. This is difficult as you would need to gain access to the network switch and find a live port on the proper network. Attacking the Ethernet cable leaves signs, including network glitches. Physically attacking a camera also leaves signs. All of this can be done, but is more challenging than a network based attack over the Internet and can be managed through physical security and good network monitoring. These are some of the reasons why I strongly prefer wired network connections over wireless network connections.

Posted in IoT, Security, System Management | Leave a comment