Assuming one could momentarily step aside from the current pandemonium generated by big data, social networks, and smartphones, an obvious question comes to mind: what will be the next wave of innovations? Of course, the only safe prediction about forecasting the future is that it is very easy to get it wrong. Who would have thought that, 20 years after the flop of the Newton (http://www.youtube.com/watch?v=MiNKMmyRiw4), we would be queuing to buy an iPad? Sometimes, you just have to wait for the right Apple to fall on your head. Still, there are a couple of low-hanging fruits that seem ripe enough to be worth monitoring.

Some pundits have been talking about “the internet of things” for some years now (Atzori et al. (2010)). According to this view, the next revolution will not be the vertical development of some uncharted new technology, but a horizontal one: it will be about connecting anything to anything, or a2a, not just humans to humans. They have a point. One day, you-name-it 2.0 will be passé, and we might be thrilled by a2a technologies. Even now, the fact that the Newton was advertised as being able to connect to a printer sounds quite amazing. Imagine a world in which your car autonomously checks your electronic diary and reminds you, through your digital TV, that you need to get some petrol tomorrow, before your long-distance commute. All this and more is already feasible. The greatest obstacles are a lack of shared standards, limited protocols, and hardware that is not designed to be fully modular with the rest of the infosphere. It is a problem of integration and defragmentation, which we routinely solve by forcing humans to work like interfaces. We connect the printer to the computer, we translate the GPS's instructions into driving manoeuvres, and we make the fridge talk to the grocery supermarket. Essentially, the internet of things is about getting rid of us, the cumbersome humans in the loop. In a defragmented and fully integrated infosphere, the invisible coordination between gadgets will be as seamless as the way in which your iPhone interacts with your iMac.
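To see what removing the human interface amounts to in practice, here is a minimal sketch in Python, under entirely invented names (Bus, car_agent, and so on; no real IoT protocol or API is assumed): a toy publish/subscribe bus on which a diary, a car, and a TV coordinate without a human relaying messages between them.

```python
from collections import defaultdict

class Bus:
    """A toy in-memory publish/subscribe bus, standing in for a shared a2a protocol."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)


def fuel_level_pct():
    # Stubbed sensor reading; a real car would query its fuel gauge.
    return 18


bus = Bus()

# The car reacts to tomorrow's diary entries: a long trip plus low fuel
# triggers a reminder, published for whichever display is listening.
def car_agent(event):
    if event["distance_km"] > 100 and fuel_level_pct() < 25:
        bus.publish("tv/reminder",
                    f"Get petrol before tomorrow's trip to {event['destination']}.")

# The TV renders whatever reminders reach it.
def tv_agent(text):
    print(f"[TV overlay] {text}")

bus.subscribe("diary/tomorrow", car_agent)
bus.subscribe("tv/reminder", tv_agent)

# The diary publishes tomorrow's appointment; the rest is machine-to-machine.
bus.publish("diary/tomorrow", {"destination": "Oxford", "distance_km": 140})
```

The point of the sketch is only that no line of it asks a human to carry information from one device to another; the shared bus does the interfacing that we currently do ourselves.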

According to a recent white paper (Evans (2011)) by CISCO IBSG, a multinational corporation that, admittedly, designs, manufactures, and sells networking equipment, there will be 25 billion devices connected to the Internet by 2015 and 50 billion by 2020 (see Fig. 1).

Fig. 1 The growth of world population and of connected devices. Source: Evans (2011). F = forecast

The number of connected devices per person will grow from 0.08 in 2003, to 1.84 in 2010, to 3.47 in 2015, to 6.58 in 2020. To an extraterrestrial, global communication on earth will soon appear to be largely a non-human phenomenon, as Fig. 2 illustrates.

Fig. 2 The total space of connectivity in relation to the growth of world population and of connected devices. Source: Evans (2011). F = forecast
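The per-person figures just quoted follow from simple division, assuming the approximate device and population estimates reported in Evans (2011): roughly 500 million devices and 6.3 billion people in 2003; 12.5 billion and 6.8 billion in 2010; 25 billion and 7.2 billion forecast for 2015; and 50 billion and 7.6 billion forecast for 2020.

```python
# Connected devices per person, computed from the approximate figures
# in Evans (2011); devices and population are both in billions.
estimates = {
    2003: (0.5, 6.3),
    2010: (12.5, 6.8),
    2015: (25.0, 7.2),  # forecast
    2020: (50.0, 7.6),  # forecast
}

for year, (devices_bn, people_bn) in estimates.items():
    print(f"{year}: {devices_bn / people_bn:.2f} connected devices per person")
```

This reproduces the 0.08, 1.84, 3.47, and 6.58 figures above.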

The second fruit is affective computing (Picard (2000)). This is an even older prediction whose time seems to have come. Computerised artefacts (artificial agents, or AAs) not only have problems talking to each other; they also disregard their masters' feelings. When we were punching cards, this was hardly an issue. But at least since the early 1990s, a branch of AI has been studying how AAs might deal with human emotions through smart interfaces. Two fundamental, philosophical questions underpin this research programme:

  (a) whether AAs might (or even ought to) be able to recognise human emotions and respond to them adequately; and

  (b) whether AAs themselves might (or even ought to) be provided with (the capacity to develop some) emotions.

Question (a) is addressed by research in Human–Computer Interaction (HCI). Users' physiological conditions and behavioural patterns may be indicative of their emotional states, and developing AAs able to exploit such data in order to enact appropriate responsive strategies seems like a good and feasible idea. Today, affective computing can already prevent nasty and regrettable emails, reduce driving mistakes, encourage healthy habits, offer dietary advice, or point out better consumer options. The reader may recall a distant ancestor of this sort of HCI, Microsoft's infamous Office Assistant, known as Clippy. It was meant to assist users but turned out to be a nuisance and was eventually discontinued. I am not sure I would enjoy a toaster that patronises me (“Luciano, shouldn't you have an apple instead?”), but I am ready to concede that some advantages might be worth a few hurt feelings.
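As a deliberately crude illustration of the pattern behind question (a), here is a sketch in which a user's state is inferred from physiological and behavioural signals and then used to pick a responsive strategy, such as holding back that regrettable email. The signals, thresholds, and names below are invented for the example; real affective-computing systems rely on far richer models.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    heart_rate_bpm: float          # e.g. from a wearable
    typing_errors_per_min: float   # e.g. from the keyboard driver

def infer_state(s: UserSignals) -> str:
    # Invented thresholds, purely illustrative.
    if s.heart_rate_bpm > 100 and s.typing_errors_per_min > 8:
        return "agitated"
    if s.typing_errors_per_min > 8:
        return "distracted"
    return "calm"

def respond(state: str, draft_tone: str) -> str:
    # One responsive strategy mentioned in the text: prevent regrettable emails.
    if state == "agitated" and draft_tone == "angry":
        return "Hold the message in the outbox and suggest a later review."
    return "Send as usual."

print(respond(infer_state(UserSignals(112, 11)), "angry"))
```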

The real hype in affective computing concerns question (b). Here, the most extraordinary claims are made, often unsubstantiated by our current understanding of computer science and our limited knowledge of animal emotions. Simplifying a lot, but not too much, the reasoning is that we are good at intelligent tasks because we are also emotionally involved in them, so real AI will be achievable only if some “emotional intelligence” can be developed. I hope this sounds like a modus tollens to you as well; but even if it does not, the premise that intelligence requires emotion seems to be in need of some serious justification. Vague evolutionary references and the usual anti-Cartesianism de rigueur are messy and confusing. There are plenty of very intelligent animals that flourish without any ostensible reliance on emotions or feelings of any kind. Crocodiles don't cry, and ants do not get annoyed with cicadas. A hot computer is one with a broken cooling system.
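To make the inference explicit (this is my reconstruction of the reasoning just summarised, not a formalization offered by the affective-computing literature), write I(x) for “x performs intelligent tasks” and E(x) for “x is emotionally involved in them”:

```latex
% Premise of the strong affective-computing argument:
\forall x \,\bigl( I(x) \rightarrow E(x) \bigr)
% Modus tollens, applied to an artificial agent a lacking emotions:
\neg E(a) \;\therefore\; \neg I(a)
% The animal counterexamples deny the premise itself:
% any creature c with I(c) \wedge \neg E(c) falsifies the universal claim.
```

Read this way, the premise does all the work: if it fails, as crocodiles and ants suggest it does, nothing follows about what AAs can or cannot achieve without emotions.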

It is hard to forecast what will happen when things start talking to each other, but I would not be surprised if Apple were to design white goods in the near future. At the same time, I hope our gadgets won't be too emotional when we finally stop pampering them, as we have been forced to do for decades. It is high time for ICT to grow up and move out of our mental space. Being left alone: could this be the next big wave?