Saturday, April 6, 2013

Who is performing that action?

A common way to model the dynamics of a smart environment is to describe each device as a finite state machine (see automata theory), so that the overall state of the environment is the composition of the states of the individual components permeating it.
The modeling is usually done with knowledge representation techniques and essentially consists of specifying the properties of interest and the axioms that define the effects that actions have on the environment.
Events can therefore be distinguished as endogenous or exogenous, according to the source they originate from.
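As a rough illustration, here is a minimal Python sketch of this idea, with made-up device names, states, and actions: each device is a finite state machine, the environment state is the composition of the device states, and each event is labeled as endogenous or exogenous.

from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    ENDOGENOUS = "endogenous"  # generated by a device itself
    EXOGENOUS = "exogenous"    # caused from outside, e.g. by a user

@dataclass
class Event:
    device: str
    action: str
    source: Source

class Device:
    """A device modeled as a finite state machine: a set of states plus a transition table."""
    def __init__(self, name, states, transitions, initial):
        self.name = name
        self.states = states
        self.transitions = transitions  # maps (state, action) -> next state
        self.state = initial

    def apply(self, event):
        # Unknown (state, action) pairs leave the device state unchanged.
        self.state = self.transitions.get((self.state, event.action), self.state)

# Hypothetical devices: the overall environment state is just the composition
# of the individual device states.
lamp = Device("lamp", {"off", "on"},
              {("off", "switch_on"): "on", ("on", "switch_off"): "off"}, "off")
washer = Device("washer", {"idle", "washing"},
                {("idle", "start"): "washing", ("washing", "done"): "idle"}, "idle")
environment = {d.name: d for d in (lamp, washer)}

lamp.apply(Event("lamp", "switch_on", Source.EXOGENOUS))  # a user switches on the lamp
print({name: d.state for name, d in environment.items()})  # {'lamp': 'on', 'washer': 'idle'}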
Intelligent agents, which depending on their task may also be called controllers or decision makers, can collect this stream of event data and consequently perform further actions, which are themselves events. This means that we can see any distributed community of digital devices as a message-passing system in which the unit of information is the event. Events can be generated directly by smart devices or detected using Non-Intrusive Appliance Load Monitoring (NILM).

So far, however, we have not considered the user, or rather the users, who are immersed in such a complex system. With their daily activities they also generate events: switching on the lights or starting the washing machine to do the laundry, for instance.
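To make the message-passing view concrete, here is a toy publish/subscribe sketch in which devices, users, and a controller agent all exchange events; the event and device names are invented for illustration.

from collections import defaultdict, namedtuple

Event = namedtuple("Event", ["source", "device", "action"])

class EventBus:
    """Toy publish/subscribe bus: every component communicates by passing events."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, action, handler):
        self._handlers[action].append(handler)

    def publish(self, event):
        for handler in self._handlers[event.action]:
            handler(event)

bus = EventBus()
# A controller agent reacts to an incoming event by performing a further action,
# which is itself just another event on the bus.
bus.subscribe("started", lambda e: bus.publish(Event("controller", "lamp", "dim")))
bus.subscribe("dim", lambda e: print(f"{e.device}: dimming (requested by {e.source})"))
bus.publish(Event("user", "washer", "started"))  # prints: lamp: dimming (requested by controller)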
The major strength of smart environments, and of ubiquitous computing in general, is their ability to adapt to specific users by taking their preferences into account when carrying out their tasks.
Tailoring a ubiquitous service to a specific user requires identifying that user as an unambiguous source and consumer of events, so that he can be distinguished from, say, his sister or his wife. This could be done by forcing every user to carry a badge, an RFID tag, or some other token that identifies him when accessing the network. However, this is unlikely to happen, and to really ensure identification in smart environments we have to opt for a less obtrusive solution. The widespread diffusion of smartphones offers a second option. Since nearly everyone today carries at least one smartphone, we could use it to access all the smart devices, which implies higher flexibility and easier maintenance for applications built on those appliances. In addition, this would give users a uniform interface to the devices, which currently expose their features in different ways and require a learning period before users are actually ready to exploit them.
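As a purely illustrative sketch, events could be tagged with the identity of the user who caused them, resolved for instance from the smartphone that issued the command; the addresses, names, and lookup function below are all invented assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UserEvent:
    device: str
    action: str
    user_id: Optional[str]  # None when the originating user cannot be identified

def resolve_user(phone_address, known_phones):
    """Hypothetical lookup: map the address of the issuing smartphone to a user."""
    return known_phones.get(phone_address)

known_phones = {"aa:bb:cc:01": "alice", "aa:bb:cc:02": "bob"}  # invented data
event = UserEvent("lamp", "switch_on", resolve_user("aa:bb:cc:01", known_phones))
print(event)  # UserEvent(device='lamp', action='switch_on', user_id='alice')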
Accessing an interface-less ecosystem of digital appliances essentially means centralizing what today is a distributed interface. A centralized interface agent has to be able to interpret the user's commands using natural language processing, engage him to guarantee a good experience, and access the set of available digital devices to act effectively on the physical environment. To implement this vision researchers have been testing several modalities; spoken interfaces in particular have received remarkable attention. Another possibility is the use of relational agents, which offer multiple modalities for interacting with the user and give the interacting entity an actual appearance.
http://www.chatbots.org/
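Below is a minimal sketch of what such a centralized interface agent might look like, with naive keyword matching standing in for real natural language processing and a hypothetical registry of devices and their actions.

def interpret(utterance, registry):
    """Toy interpreter: keyword matching stands in for real natural language processing."""
    text = utterance.lower()
    for device, actions in registry.items():
        if device in text:
            for action in actions:
                if action in text:
                    return device, action
    return None  # the agent could ask the user to rephrase

# Hypothetical registry of devices and the actions they expose.
registry = {"lamp": ["switch on", "switch off"], "washer": ["start"]}
print(interpret("Please switch on the lamp", registry))  # ('lamp', 'switch on')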

In conclusion, identifying users when they interact with smart environments is crucial for collecting events that describe their behaviour, analyzing the resulting data stream, and using the inferred information to offer tailored services. Intelligent interfaces (including advances in affective computing) might help toward this purpose.

