M2M/IoT Application Platforms
The next platform category we are going to look at comprises platforms that enable the development and operation of basic M2M solutions and more advanced IoT mashups. M2M solutions typically focus on basic remote asset monitoring and management, while IoT mashups add more semantically rich application logic. We have explicitly excluded very high-throughput, hard real-time data acquisition solutions; these are described in their own category (Industrial Data Acquisition Platforms) in the next section.
The main goal of M2M/IoT application platforms is to streamline the development process and provide as much out-of-the-box functionality as possible, making application development and maintenance more efficient. The previous figure describes the key elements of such a platform:
- The core M2M/IoT application platform in our definition consists of a backend plus different technologies that enable asset integration, such as agents, libraries, and interfaces. In our standard AIA, these are the two middle tiers, which together are referred to as the IoT cloud/M2M.
- Asset interface definitions: Ideally, the platform will provide a consistent way of describing all functional interfaces of the assets and devices in an abstract format that can be used in all tiers of the AIA, from device integration to application development in the backend.
- The backend, which usually contains a central database or repository for managing all asset-related data, as well as a set of services that help to manage the distributed assets.
- A set of asset integration technologies, including sophisticated agent technologies, basic libraries for less powerful hardware, and remote interfaces for direct integration of assets.
- Support for a set of protocols, such as MQTT, CoAP, XMPP, and many others, that enable remote communication between the asset and the backend (see the MQTT sketch after this list).
- IoT application development and mashup capabilities that leverage the data and services of the backend.
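To illustrate the protocol support mentioned above, the following minimal sketch shows how an asset could publish a single property reading to the backend over MQTT. It uses the open source Eclipse Paho Java client; the broker address, client ID, and topic naming scheme are assumptions made for this example and are not prescribed by any particular platform.

```java
// Illustrative only: publishing a telemetry reading from an asset to the
// backend over MQTT, using the Eclipse Paho Java client. Broker URL, client
// ID, and topic name are assumptions, not part of any specific platform.
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TelemetryPublisher {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://iot-backend.example.com:1883",
                                           "asset-4711");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        // Publish a temperature reading with QoS 1
        MqttMessage message = new MqttMessage("{\"temperature\": 72.5}".getBytes());
        message.setQos(1);
        client.publish("assets/asset-4711/properties/temperature", message);

        client.disconnect();
    }
}
```

A real deployment would, of course, add TLS, authentication, and reconnect handling, all of which the Paho client also supports.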
Asset and Device Interface Management
As described by Roman Wambacher in the Recommendations section of the chapter on gateways and sensor networks, managing heterogeneity is probably one of the biggest challenges for any company that has large numbers of assets of different types and versions deployed in the field. In order to address this problem, most platforms support some form of standard for defining and managing asset and device interfaces. This is also important for the platforms themselves, because a standardized mechanism for the definition of asset and device interfaces can be used to ensure that all elements in the platform are functioning properly together, from asset integration and asset data persistence, through to asset-related UIs.
The previous figure shows the main elements that should be supported by a generic device and asset model (a code sketch of such a model follows the list):
- Assets and devices should be modeled explicitly: It should be possible to define aggregation hierarchies (multiple devices per asset, asset groups, etc.). In the previous example, a “managed entity” is used to describe the common characteristics of assets and devices.
- The mobile location of a managed entity stores data about its current location, with a time stamp indicating when it was last updated. This is important for mobile equipment. In the backend, the mobile location information can be stored as a time series, allowing the movement of mobile assets to be traced.
- Events can be submitted by assets or devices, for example to indicate an error situation. They are time stamped to indicate when the event was submitted and, like location data, should be captured as a time series in the backend to ensure full traceability.
- Multiple properties can be defined individually, such as temperature, pressure, etc.
- Many assets require efficient management of configuration data and other files.
- Many assets will also support different operations that can be triggered remotely from the backend; for example, “increase temperature,” “increase pressure,” “shut down,” “restart,” etc.
- Finally, the model also needs to reflect user access rights, which are often role-based. For example, only the super user role might have the permission to perform a restart.
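To make this more tangible, the following sketch outlines how such a generic model could be expressed in Java. All class and field names are illustrative assumptions; they are not taken from any specific platform or standard, which typically define such models in dedicated description formats rather than code.

```java
// A minimal sketch of a generic asset/device model, loosely following the
// elements listed above. All class and field names are illustrative
// assumptions, not taken from any specific platform or standard.
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Common characteristics shared by assets and devices. */
abstract class ManagedEntity {
    String id;
    Location lastKnownLocation;          // current location plus time stamp
    List<Event> events = new ArrayList<>();          // time-stamped event history
    Map<String, Object> properties;      // e.g. temperature, pressure
    Map<String, String> configuration;   // configuration data and file references
    List<Operation> operations = new ArrayList<>();  // remotely triggerable operations
}

class Device extends ManagedEntity { }

/** An asset can aggregate multiple devices, and assets can be grouped. */
class Asset extends ManagedEntity {
    List<Device> devices = new ArrayList<>();
    List<Asset> subAssets = new ArrayList<>();
}

class Location {
    double latitude;
    double longitude;
    Instant timestamp;                   // when the location was last updated
}

class Event {
    String type;                         // e.g. "ERROR", "WARNING"
    String message;
    Instant timestamp;                   // when the event was submitted
}

/** An operation such as "restart", guarded by role-based access rights. */
class Operation {
    String name;                                     // e.g. "restart", "increaseTemperature"
    List<String> allowedRoles = new ArrayList<>();   // e.g. only "superuser"
}
```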
There are a number of initiatives underway to define standards for the definition of interfaces to assets and devices in the IoT. For example, the IPSO Alliance has defined the Smart Objects specification for interoperability between devices in the IoT. Or take OSGi RFC 196, which aims to define a device abstraction layer for Java-powered gateways running OSGi. Another interesting example is the Vorto project, recently initiated by Bosch Software Innovations at the Eclipse Foundation. Vorto aims to define a meta-information model for devices and assets, and to provide code generators, an open source tool-set for creating and managing interfaces, and an information model repository for storing and managing them.
The whole area of higher-level interface definitions will most likely be a key factor for interoperability in the IoT, and as such will also play an important role in the success of the IoT overall. As usual, we don't expect the world to agree on one unified standard; however, agreement on a small number of widely established standards in different vertical application domains would be helpful.
M2M Backend/IoT Cloud
The M2M backend (often re-branded as the IoT cloud today) usually provides central management of asset-related data, as well as a set of generic support functions that allow assets to be managed and monitored in the field. Key functions include:
- Asset database or repository: Stores asset definition and configuration data, as well as status information and time series data (for example, event history, metering data, etc.). The database schema should be generic and support many different versions of the generic asset interface definitions, as discussed in the previous section.
- Asset monitoring and management UI: A generic UI for administrators that provides an overview of all registered assets, including asset health and history. The asset admin UI will read data from the local asset database or repository, but will also allow users to update values by getting new readings directly from the asset. Also, if the asset supports operations (see discussion on asset interfaces in the previous section), the UI will list all available operations and allow the user to dynamically invoke them.
- Reporting and dashboards: Basic descriptive analytics functions (for example, average machine health) should be provided in an ideally customizable dashboard.
- Alarm management: The system should ideally provide a scripting or business rules-based mechanism that allows the definition of actions for certain events or alarms. For example, a rule could define that a text message is sent to an administrator if a machine takes longer than 15 minutes to respond to a regular ping message.
- Remote access: Many assets will provide some form of remote login mechanism for on-asset diagnostics. The platform should support remote diagnostics tools and firewall-friendly remote access.
- Content distribution: Many assets require regular updates of content such as configuration files, test data, documentation, and digital marketing collateral that is to be displayed locally.
- Software distribution: The platform must be able to support secure and efficient management and distribution of firmware, operating system updates, and application logic.
- Security management: Includes certificate management, as well as user and role management (or integration with external systems), and management of permission assignments between users (or roles) and assets (or specific data or functions on the assets).
- Logging and tracing: All actions must be fully logged in an efficient and transparent manner.
- Automation: The platform should provide support for automation of tasks, such as through the support of APIs and scripting engines. For example, instead of manually checking the right version of a certain configuration file on ten thousand remote assets, a simple script should be able to automate this task.
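As a small illustration of the kind of automation mentioned in the last item, the following sketch checks the version of a configuration file across all registered assets via an assumed REST API on the backend. The endpoint paths, response format, file name, and API token are hypothetical; a real platform would define its own API for this.

```java
// Hypothetical sketch of the automation example above: checking the version
// of a configuration file across all registered assets via an assumed REST
// API of the backend. Endpoint paths, response format, and the token are
// illustrative assumptions only.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConfigVersionCheck {
    private static final String BACKEND = "https://iot-backend.example.com/api";
    private static final String EXPECTED_VERSION = "2.4.1";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Assumed endpoint returning one asset ID per line
        HttpRequest listAssets = HttpRequest.newBuilder()
                .uri(URI.create(BACKEND + "/assets?fields=id"))
                .header("Authorization", "Bearer <api-token>")  // placeholder token
                .build();
        String[] assetIds = client.send(listAssets, HttpResponse.BodyHandlers.ofString())
                                  .body().split("\n");

        for (String assetId : assetIds) {
            // Assumed endpoint returning the version string of a named config file
            HttpRequest getVersion = HttpRequest.newBuilder()
                    .uri(URI.create(BACKEND + "/assets/" + assetId
                            + "/files/main.cfg/version"))
                    .header("Authorization", "Bearer <api-token>")  // placeholder token
                    .build();
            String version = client.send(getVersion,
                    HttpResponse.BodyHandlers.ofString()).body().trim();
            if (!EXPECTED_VERSION.equals(version)) {
                System.out.println("Asset " + assetId + " has outdated config: " + version);
            }
        }
    }
}
```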
Asset Integration
There are many different strategies for integrating the backend with assets, but the main categories include:
- Agent-based integration (usually in combination with a relatively powerful gateway): Allows for deployment of sophisticated integration and business logic on the asset.
- Local libraries: If only less powerful hardware is available on the asset, many platforms will provide libraries (in C/C++ or JavaScript, for example) that allow for highly optimized, custom integration with the backend. These libraries will understand the remote interfaces supported by the backend, and will also support the local interfaces expected by the backend. However, they will often be much simpler and will only support a subset of the interfaces.
- Interface-based integration: In some cases, it will not be possible or will not make sense to use one of the two approaches described previously. If the backend supports a set of well-documented online interfaces based on open standards, then it will also be possible to integrate assets via other means.
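As a minimal sketch of the third, interface-based option, the following code posts an event for an asset directly to an assumed REST interface of the backend, using only the Java standard library. The URL, resource path, and JSON payload are illustrative assumptions rather than the interface of any specific platform.

```java
// Minimal sketch of interface-based integration: an asset (or a small proxy
// application next to it) posts an event directly to an assumed REST
// interface of the backend. The URL and JSON payload are illustrative.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReportEvent {
    public static void main(String[] args) throws Exception {
        String event = "{\"type\": \"ERROR\", \"message\": \"coolant pressure low\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://iot-backend.example.com/api/assets/asset-4711/events"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(event))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Backend responded with status " + response.statusCode());
    }
}
```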
With the ever-decreasing cost of hardware and an ever-increasing need for more services on the assets, the agent-based approach would seem to have great potential in the IoT. The smart home was one area in which there was early adoption of on-asset agent technology, both for integration purposes and for the provision of local business logic. Another area where an agent-based approach could be of interest in the future is the automotive industry. See our discussion about open car app platforms in the Connected Vehicle chapter.
Key features that this type of agent software would need to support include:
- Application sandbox: A secure environment to execute local applications in such a way that they don't interfere with each other or with the environment
- Device abstraction layer: Must provide support for integration with local devices, mapping to the abstract interface definitions, and support for the interfaces required by the backend to read asset status data, to execute operations on the asset, etc.
- Management agent as a counterpart for the software distribution mechanism on the backend
- Support for key protocols, both for different device types and local wireless communication and for backend communication
- User and role management, compatible with the same functions on the backend
- Local data collection and filtering (see the discussion on fog computing in the Gateways and sensor networks chapter, or the section on complex event processing in the Data Management chapter)
- Local automation, similar to automation on the backend
Probably the most advanced open standard out there for agent technology is OSGi. OSGi offers very sophisticated support for application isolation, resource allocation, application lifecycle management, application dependency management, and so on.
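To give a flavor of what such an agent component can look like, the following sketch shows a generic OSGi bundle activator that registers a local data collection service when the bundle is started. Only the org.osgi.framework API is standard OSGi; the DataCollector interface and its trivial implementation are assumptions made for this example.

```java
// A generic OSGi bundle activator, sketching how a local data collection
// component might be registered as a service inside an agent container.
// The DataCollector interface and its implementation are assumptions made
// for this example; only the org.osgi.framework API is standard OSGi.
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class DataCollectorActivator implements BundleActivator {

    private ServiceRegistration<DataCollector> registration;

    @Override
    public void start(BundleContext context) {
        // Register the local data collection service so that other bundles
        // (e.g. the management agent or a protocol adapter) can look it up
        registration = context.registerService(
                DataCollector.class, new SimpleDataCollector(), null);
    }

    @Override
    public void stop(BundleContext context) {
        // Clean up when the bundle is stopped by the lifecycle management
        registration.unregister();
    }
}

interface DataCollector {
    double readTemperature();
}

class SimpleDataCollector implements DataCollector {
    @Override
    public double readTemperature() {
        return 42.0; // placeholder: a real implementation would query a local device
    }
}
```

The OSGi service registry then allows other bundles, such as a management agent or a protocol adapter, to discover and use this service without a compile-time dependency on the implementation.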
IoT Applications and Mashups
So far, we have mainly been focusing on features that are required to integrate with assets, remotely monitor assets, and react to events received from assets. However, we have not spoken about how to leverage these capabilities in order to actually build those semantically rich, new applications that will differentiate Enterprise IoT from basic M2M. Fortunately, most platforms will, by default, provide a set of open interfaces and APIs that can be used to build such applications. There are also products becoming available that promise rapid development of IoT applications and mashups by integrating with the basic M2M platforms (or the “IoT cloud,” depending on the terminology used).
Rick Bullotta, CTO of ThingWorx, is passionate about this topic, as the following interview shows.
Dirk Slama: Rick, how do you see the future of application enablement in the IoT?
Rick Bullotta: We see this as two distinct tiers: the device/machine cloud and the application enablement layer. We find that many of our large global customers already have connected devices – you’d be amazed how many devices out there are connected via dial-in modems and other techniques. We have designed our platform in such a way that the application enablement layer can work with other device clouds. It’s really a matter of supporting customer choice and the reality that they might be using different technologies in different business units. While we offer a world-class device cloud of our own, we also chose to “play well with others”. An application platform must also embrace the systems and people involved, as well as the machines.
Dirk Slama: So, the IoT is not just about machines…
Rick Bullotta: Any IoT application that is doing anything interesting is integrating with lots of other lines of business applications, as well as integrating with human elements. This is an important capability to have for the application enablement platform: bring in assets such as your ERP system, CRM system, field service management, weather data, and energy pricing and make that composable functionality available at an application platform level. Composition can be applied to user interface, analytics, and business processes.
Dirk Slama: Let’s talk a little bit about the strengths and potential limitations of this approach.
Rick Bullotta: Let me give you an example: I want a consumer device that controls a smart vacuum in my home. That’s a very focused, linear application. Not a lot of complexity. Many companies are developing their own end-to-end technologies where they use standard development tools to create the applications. Where the traditional development approaches do not work, in my opinion, is the moment there is any complexity, or integration with other systems or data. In particular, if the application is to be dynamic in nature, if you want to add new services and capabilities, be able to tailor it to different customers and use cases, or take a very iterative development approach, then you need a different model for developing applications. The number one thing to note is: If it’s a static app that’s not going to change very often, you have lots of choice in what you can use. If it’s a very dynamic application, or a business model that’s constantly adding new capabilities, an environment like ThingWorx provides substantial initial and ongoing advantages in terms of productivity. We’ve found that the vast majority of IoT applications fit those criteria.
Dirk Slama: So to conclude, what are the key enablers for IoT platforms?
Rick Bullotta: You need to start with powerful out-of-the-box functionality from device to cloud. You need to enable others to enhance and extend your platform. Let others innovate. Let others create innovative solutions. But let them leverage our development environment, transport, security, and the high-level services we provide. Let people plug in their own algorithms, business logic, connectors, and user interface components. It’s not an either-or situation. If people prefer to use our software development kits to integrate their unique intellectual property, we let them do it in a sustainable way and in a way that enables rapid composition, as we discussed earlier.
Integrating Subnets of Things
As we discussed in the introduction, many Enterprise IoT solutions will initially focus on a relatively well-defined, often closed ecosystem, or Subnet of Things (SoT). Integration within and between SoTs will become one of the key challenges for the evolution of the IoT.
There are many companies addressing this on different levels. One of them is wot.io, a New York-based startup that aims to assist with interconnection between devices, data services, and owners of data in order to unlock business opportunities. While the company does not actually provide data services itself, it aims to streamline the technical, legal, and business processes involved when end users make use of third-party data services. From a customer perspective, wot.io aims to ‘look like’ a salesforce.com for the IoT. From a technical backend perspective, it is more analogous to the Object Management Group’s DDS standards (see below for more detail). The main value that the company adds is in the development and maintenance of the legal and business frameworks and agreements that are needed to bridge the gap between that salesforce.com shop window and a supporting technical integration with data services partners. Put another way, the company competes on the basis of its business facilitation capabilities, while its underpinning technical capabilities are a qualifier for competing in the marketplace.
The following interview with Allen Proithis, President & Founder of wot.io explores this interesting market positioning in more detail.
Jim Morrish: wot.io is somewhat of a new player in the IoT space. Can you summarize why you founded the company, and characterize the opportunity that you see?
Allen Proithis: Almost everyone who wants to participate in the incredible opportunities around IoT is struggling at some level. This struggle to fully realize IoT driven products, services and cost efficiencies is caused by the number of participants it takes to create value, and by the technical, business & legal friction that needs to be addressed to define a successful, working relationship between these players. We remove much of that friction.
Jim Morrish: Can you give me some examples?
Allen Proithis: We often find that end customers struggle with who to call for an IoT solution, which vendors to choose, and how to justify the massive custom integration effort including consideration of risks, costs and time. Companies that add value to IoT data struggle to access the market thanks to massive market fragmentation, complex integration requirements and the challenge of maintaining relationships with other companies required to add different value to the same data. And systems integrators attempting to sell professional services are losing deals where the majority of the work is custom. Successful SIs will need to focus on where they can add the most value and will partner for pieces of the solution that can be productized.
Jim Morrish: And you don’t think that standardization, or the simple fact that many IoT players are already building those interfaces is going to solve these problems anytime soon?
Allen Proithis: It is generally recognized that the IoT is going to be huge in terms of both the impact that it will have on our daily lives and also new commercial opportunities. What is not nearly so well recognized is that the future IoT is going to be somewhat ‘lumpy’. To put a little more flesh on those bones, we think that the next stage of development of the IoT will be driven by common data standards, association with common providers of data services, common ownership of data sources, or common cause amongst the owners of data, and will be characterised by relatively tightly integrated islands of connected devices, which we term ‘Subnets of Things’. The connections and interfaces between these islands of connected devices can be expected to develop significantly more slowly than connections and interfaces within these islands. That’s where wot.io comes in. We can quickly connect, for example, users of ARM’s mbed platform, or the ARM mbed Subnet of Things, with Rackspace, ScaleDB, or Stream Technologies.
Jim Morrish: But Object Management Group is doing something pretty similar with their Data Distribution Service (DDS) standards, right?
Allen Proithis: Real Time Publish-Subscribe (RTPS) DDS would be analogous to the wot.io message bus and adapter framework. Just as we create adapters for other pub/sub streaming protocols (AMQP, MQTT, XMPP, Kafka, and even cloud services like PubNub), we will create a transport adapter for RTPS DDS and bridge the message routing worlds. But message routing is not what wot.io is about; message routing is a sub-service to make things work. wot.io is a data service exchange for connected device platforms. Data services are integrated applications that operate on data from connected device platforms, and yes, we use a pub/sub SOA to make it happen.
Jim Morrish: OK, that sounds like the Rackspace marketplace?
Allen Proithis: Yes, Rackspace offers hosted applications, but they are not integrated into data services that can operate on and add value to data from connected device platforms without significant engineering. wot.io and Rackspace are partners. Not only can we deploy our data service to the Rackspace infrastructure, we can also integrate services from the Rackspace marketplace.
Jim Morrish: So what’s the wot.io proposition, in a nutshell?
Allen Proithis: wot.io is a data service exchange for connected device platforms. We help clients meet the challenge of rapidly and flexibly extracting business value from connected data. Our solution is independent of individual technologies and complements existing vendor platforms for organizations already operating in the Internet of Things and machine-to-machine industries. The fact is that many of those Subnets of Things that I described earlier could benefit from being connected together somehow, and it makes sense for a small number of market participants, like wot.io, to focus on making those connections, rather than have each of the individual Subnets of Things build their own bilateral relationships. There are a lot of scale benefits to be had from this approach. But, in truth, establishing those connections is the first step towards a wider concept of liquidity in the provision of data services. This is why we characterise wot.io as a Data Services Exchange, or DSE.
Jim Morrish: You mean decreasing the friction associated with connecting together providers of data services with potential consumers of those services?
Allen Proithis: Correct. Having built all those connections to those Subnets of Things, a Data Services Exchange is ideally positioned to offer clients access to a range of services provided by partners that are already integrated into the DSE’s ecosystem. For instance, a DSE might offer access to Volt, Hadoop, Cassandra, SAP, or MongoDB database services, or even some hybrid combination of these, all in an essentially pre-integrated and off-the-shelf commercial package. And from the data service supplier’s perspective, applications are at the core of the IoT opportunity. Every connected device must have an associated application, possibly several, and the development of those applications and the provision of supporting capabilities – such as, for example, data analytics, data mining, and other data services – represent real commercial opportunities for a range of players. A DSE helps those specialized and differentiated providers connect to potential customers.
Jim Morrish: And you think that this freer exchange and interconnection of more differentiated services is what will characterise the IoT in the coming years?
Allen Proithis: We expect that this more horizontal perspective on M2M markets will become a dominant theme. Up until now the M2M market has been dominated by industry behemoths. As Tier 2 and other smaller players enter the market they will naturally look for ways to differentiate by developing specific capabilities. The mass market phase of M2M and IoT adoption will be characterised by a more differentiated ‘horizontal first’ approach.
Jim Morrish: And that’s the dynamic that you are looking to support?
Allen Proithis: Exactly, the wot.io data service exchange is a marketplace of integrated third party data services. By already integrated, we mean that the developer does not need to focus on the technical, business or legal aspects of integrating with new data services and can hit the ground running. In general, we believe that data service exchanges are the entities that will provide the underpinnings required so that the future M2M and IoT markets can function. Entities like wot.io will allow differentiated and specialised providers to easily ‘plug-in’ to larger and less differentiated service providers, and vice versa. This will usher in a phase of development of the IoT that is characterised by the establishment of an ecosystem of differentiated data service and platform players. Ultimately, products are better than services in the IoT market, and the market as a whole will be strengthened when participants play to their strengths.
Open Source
In addition to the many different commercial platforms available in this space, there is also a lot happening in the open source community. To take just one example, have a look at the following figure, which provides an overview of the different IoT projects that are currently active at the Eclipse Foundation (recall that the Ignite | IoT Methodology described in this book is now also an official open source project, hosted by the Eclipse Foundation). The Eclipse Vorto project was established to manage interface definitions for the IoT. For embedded development, Eclipse aims to provide open source development tools for C, C++, and Lua. Kura provides an open source, Java/OSGi-based application framework for IoT gateways. On the protocol level, Eclipse already supports, or plans to support, MQTT, CoAP, OMA LWM2M, and ETSI M2M. Finally, Eclipse plans to support open-source tools for server development.