Saturday, May 20, 2017

Smart Microgrids Munich - Inaugural meetup

Dear visitor,
we are organizing a meetup on the topic of Smart Microgrids here in Munich. Our inaugural event will take place on May 31st at the Google Munich offices.



We will have talks from industrial and academic partners sharing their experiences; here is the program:

18:45 Welcome

19:00 - 19:15 Thomas Chrometzka, Director of GIZ Thailand
Impulse Talk on Thai islands agile electrification project and potential of smarter microgrids
Thomas Chrometzka is a renewable energy enthusiast, currently Director of Renewable Energy at GIZ Thailand. His team works with the Thai Ministry of Energy on enabling sustainable business partnerships to support faster renewable energy take-up in the region. Thomas and his team facilitate market entry for renewable energy companies, do policy analysis, provide capacity building and contribute to developing reference projects in the renewable energy sector. Lately, he has been working on implementing fuel-saver and hybrid-grid solutions on Thai islands, which made him look toward blockchain for new solutions.

19:20 - 19:35 David Oren, CEO of Solarly
Impulse Talk on Solar home system business model innovation & fresh insights from the customer validation journey in Cameroon
David is the co-founder and CEO of Solarly, a venture aimed at bringing energy and life-changing services to Sub-Saharan Africa. Solarly equips rural households with an up-to-date, affordable, and connected solar home system, providing easy access to electricity, creating opportunities for economic development, and giving self-sufficiency to the people. David was also involved with YEP! Young Entrepreneurs Project to educate a new generation of entrepreneurs.

19:40 - 19:55 Stefan Grosjean, CEO Smappee
Our homes are decentralized power hubs - Smappee is a traffic controller in a busy intersection
Stefan is an international expert in Smart Grids & Energy Management. He founded EnergyICT in 1991 and grew the company into a worldwide leader in energy management solutions for Commercial & Industrial enterprises, DSOs and utilities. Stefan has received multiple recognitions, including “Entrepreneur of the Year” (2000) and the “European Business Award for the Environment” (2010). He holds a Master's degree in Electric & Electronics Engineering as well as an MBA from the Vlerick Business School. One of the main drivers behind founding Smappee was frustration with the way the energy market works and the way the big energy companies treat consumers. Smappee gives consumers the means to see through the mist and keep control over their own consumption.

20:00 - 20:15 Erwin Smole, CSO GridSingularity
High-performance Blockchain client and the decentralized app/agent platform for digitalizing energy systems
Erwin Smole is the Grid Singularity co-founder responsible for strategy. Boasting over two decades of experience in the energy sector, Erwin has held senior management positions in utilities, the regulatory authority E-Control, and PwC. He has further been engaged in strategic business advisory for governments and international organizations (UNDP, EU), as well as a range of energy market entrepreneurs. Erwin is a recognized expert in the development of new business cases in both regulated and non-regulated energy market segments.

20:20 - 20:35 Francois Sonnet, Co-Founder ElectriCChain, SolarCoin evangelist
SolarCoin - Mining the Sun
François graduated with a Master's in Finance (hons) from the ICHEC Brussels Management School and is an expert in renewable energy business development. He co-founded Belgium's N°1 award-winning solar installer SunSwitch SA in 2007, at a time when renewables received little attention outside Germany and Luxembourg, and helped it grow to over 90 employees and a turnover of more than 20M EUR in under 30 months. François took his first steps at Euronext NV, Lloyds International Banking Group and the audit firm PwC in Luxembourg, and has travelled extensively. François brings business development skills to the SolarCoin ecosystem, advising solar companies on developing blockchain tools to initiate the energy transition towards a world powered by clean energy.

20:40 - 20:55 Sebnem Rusitschka, Founder Freeelio
AdptEVE - an AI-aided energy app that maximizes solar power productivity; and how it all fits together
A computer scientist since 2006, Sebnem has worked for Siemens in various technology fields (P2P & cloud computing, autonomous control systems, and big data analytics) in the application domain of energy management. Yes, she holds the record in riding hype cycles, analyzing and prototyping trends for industrial viability. For her, 2016 marks the year in which many of these emerging technologies started climbing the productivity hill. Hence, Sebnem founded Freeelio, a startup committed to bringing fair and free electricity to the masses with the help of decentralized distributed intelligence.

Until 22:00 
Discussions, Drinks, and Networking

So if you are in Munich on that day, I hope you can join us for some interesting discussions.

Saturday, November 19, 2016

Docker command list

Docker is an open source project for the management of software containers, which package complete ecosystems to run software, with the main goals of isolating software components and simplifying their deployment.
Accordingly, Docker provides virtualization on top of the Linux kernel without the overhead of running virtual machines. This also means that non-Linux systems need a Linux-based virtual machine to run Docker containers. Docker exploits kernel namespaces and cgroups to isolate and virtualize system resources for a collection of processes (e.g. process tree, CPU, network and mounted file systems), as well as union mounting, which allows for combining separate directories into a single one that is managed individually. This provides a lightweight self-contained environment where software artifacts and their dependencies can be deployed.

A Dockerfile is a text file that describes the steps that Docker has to undertake in order to prepare a Docker image. This includes, for instance, the installation and setup of third-party packages and libraries, and the creation of the runtime environment (e.g., variables, filesystem folders). An image can then be created from a Dockerfile using the command docker build. See this page for further information on the creation of a Dockerfile and Docker images.
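
For illustration, a minimal Dockerfile might look as follows (the base image, packages and paths are arbitrary examples):

FROM ubuntu:16.04
# install third-party packages and libraries
RUN apt-get update && apt-get install -y python3
# prepare the runtime environment (variables, filesystem folders)
ENV APP_HOME /opt/app
WORKDIR /opt/app
# copy the software artifacts into the image
COPY . /opt/app
# default process started when a container is run from the image
CMD ["python3", "app.py"]

The corresponding image can then be built with docker build -t <containerImageName> .
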
A Docker image is an immutable template defining a snapshot of an execution environment in terms of parameters and filesystem, while a container is a running instantiation of an image.
The Docker engine translates the image to a running execution environment, by initializing requested resources (e.g., CPU, memory) and starting specified processes inside it.
This means that changes made inside a container are lost once the container is removed, unless its state is saved (i.e., committed) to an image.

We can run a container from a local or remote Docker image as follows:
docker run -i -t <containerImageName> /bin/bash

with -it starting the container in interactive mode, providing a terminal. Alternatively, -d can be used to run the container as a background service. In this case, it is possible to later start an interactive session by using the docker attach <containerID> command. However, this connects to the same session used to start the processes in the container. To actually start a new shell on the container, the exec command can be used (i.e., docker exec -it <containerId> bash). The container can be given a name with --name <cname>. Using the -p option, the ports exposed by the container can be published on the host, while --link makes the ports exposed by another container reachable. The commands docker inspect and docker port can be used to list the port mappings of the container.
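
Putting these options together, a hypothetical web container could be started as follows (the names web and db are placeholders):

docker run -d --name web -p 8080:80 --link db:db <containerImageName>

This runs the container in the background under the name web, publishes container port 80 as port 8080 on the host, and makes the ports exposed by the db container reachable under the alias db.
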
In addition, the -v option can be used to share a host folder as a specific folder on the Docker container. For instance, -v "$PWD":/current shares the current working directory (i.e., pwd) as the /current folder on the container.

Alternatively, data can be moved between a container and its host with:
docker cp <containerName>:/root/file.extension .
docker cp file.extension <containerName>:/root/file.extension

with the first copying from the container to the current working directory on the host, and the second copying from the host to the container.

The processes running on a specific container can be monitored with:
docker top <containerId>

The process logs can instead be visualized using:
docker logs <containerId>
docker logs -f <containerId>

with -f returning a continuous log stream. While this works well, in practice with several containers a solution like logspout (https://github.com/gliderlabs/logspout) is preferable to collect logs on a log server.
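
For instance, a logspout container might be started roughly as follows (the syslog endpoint is a placeholder; check the logspout documentation for the exact route syntax):

docker run -d --name logspout -v /var/run/docker.sock:/var/run/docker.sock gliderlabs/logspout syslog://logserver.example.com:514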

Similarly, resource usage of the container can be retrieved with:
docker stats <containerId>

which provides a CLI stream that can be closed with a Ctrl-C.
The Docker daemon also exposes a REST API which can be polled by any HTTP client.
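For example, assuming a default Linux installation where the daemon listens on the Unix socket, a single stats sample can be fetched with a recent curl (Unix-socket support is assumed):

curl --unix-socket /var/run/docker.sock http://localhost/containers/<containerId>/stats?stream=false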

The container can then be gracefully stopped (i.e., SIGTERM) with:
docker stop <containerId>

Alternatively, the container can be interrupted (i.e., SIGKILL) with:
docker kill <containerId>

All available containers can be listed with:
docker ps -a

List only the running Docker containers:
docker ps

List the latest created container:
docker ps -l

To show the changes made over a container:
docker diff <containerId>

A Docker container can be committed to a Docker image, to possibly restore its state in the future:
docker commit <containerId> <containerImageName>

Similarly, a container can be exported to a tarball as follows:
docker export <containerId> > tarball.tar

The tarball can consequently be imported with:
docker import tarball.tar <containerImageName>

Alternatively, the container can be deleted with:
docker rm <containerId>

The following commands remove respectively all containers and all images available:

docker rm $(docker ps -a -q)
docker rmi $(docker images -q)

The list of Docker images available locally can be retrieved with:
docker images

An image can be looked up on Docker Hub with:
docker search <containerImageName>

An image can be downloaded from the hub with:
docker pull <containerImageName>

A Docker image can be deleted from disk as follows:
docker rmi <imageId>

An image can also be renamed and associated with tags (e.g., a version) with:
docker tag <repository>:<currentTag> <newRepository>
docker tag <repository>:<currentTag> <newRepository>:<newTag>

To list all available commands:
docker help

Docker provides a handy tool to ease the management of distributed systems.

Moreover, the introduced commands can be easily integrated into a continuous-integration (CI) / continuous-deployment (CD) pipeline, thus providing a complete solution for the setup, deployment and monitoring of distributed applications.

Wednesday, December 30, 2015

German Battery Maker Launches Clean Energy Trading | MIT Technology Review




"The German company Sonnenbatterie has launched a trading platform for distributed renewable energy by offering a way for owners of small solar and wind generation capacity to buy and sell power across the utility grid." More here

Sunday, August 16, 2015

Bridging Local-Area Networks (LAN) across buildings

Lately I needed to bridge to a second building, located 80-100 m away, in order to share the internet connection and to interface remote cameras.

The CPE mounted on a TV Antenna
The best quality-price solution I have found is the Ubiquiti NanoStation M series. The main products are the M2 and M5, which differ in their operating frequency, 2.4 GHz and 5 GHz respectively. In particular, I opted for the "loco" version, which is slightly less powerful and cheaper than the full M2.

A quick installation intro is available here and here. What's required is basically to configure one of the CPEs as an access point, which creates a new IEEE 802.11n wireless network that can be secured with a passkey. It's also important to correctly set the gateway in the network configuration.

Similarly, the other CPE can be set as station to connect to the network just created. That's it.

I am using a switch to connect to the station. Apparently the CPE also forwards DHCP requests, which allows the gateway to seamlessly distribute IP addresses to the second network.

Stepwise guides are available here and here.

So far so good, although I am curious to see how it behaves under bad weather conditions, especially over prolonged periods. The case does not really look that robust for outdoor applications; still, I am quite satisfied with the performance of the whole deployment.


Monday, July 27, 2015

Mjölnir - the open source energy advisor


We recently announced the release of the stable version 0.2 of our open source energy management system Mjölnir at http://mjoelnir.sourceforge.net.

While the tool has so far mostly targeted "disaggregated" device-level energy and power usage, we have recently introduced full support for circuit-level measurements (buildings, rooms), which unlocks considerable potential for further data analysis.
The DIN-RAIL module running the measurement system

As most energy meters use the industrial automation protocol Modbus, we have been looking for possible shields to extend our open hardware solution with RS485 communication. We finally selected this RPi hat from Libelium, while the meter is the Carlo Gavazzi EM24. The implementation is eased by the Libelium ArduPi library, which makes C code written for Arduino compatible with the Raspberry Pi. The data is then sent to our servers through a REST interface.
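
While our gateway logic is written in C on top of ArduPi, the same read-and-forward loop can be sketched in Python, for instance with the pymodbus and requests libraries. Note that the serial port, register address, scaling and REST endpoint below are illustrative assumptions, not the actual EM24 register map or our API:

from pymodbus.client.sync import ModbusSerialClient
import requests
import time

# connect to the RS485/Modbus line (port and baudrate are assumptions)
client = ModbusSerialClient(method='rtu', port='/dev/ttyUSB0', baudrate=9600)
client.connect()

while True:
    # read two holding registers from slave 1 (placeholder register address)
    rr = client.read_holding_registers(0x0000, 2, unit=1)
    if not rr.isError():
        # combine the two 16-bit registers and apply an example scaling factor
        power = ((rr.registers[0] << 16) | rr.registers[1]) / 10.0
        # forward the measurement to the server through a REST interface (placeholder URL)
        requests.post('https://example.org/api/measurements', json={'power': power})
    time.sleep(10)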

The overall component shown in the picture is therefore a low-cost solution able to retrieve remote measurements via the RS485/Modbus and the USB/ZigBee networks. This opens the way for the future integration of other measurement units, such as water and gas meters.

The support of circuit-level measurements required changes to the Mjölnir system.

The system is now organised in buildings, rooms and devices. A circuit is described by its ID and can be associated with a single building or room.

The newly introduced device control is implemented using web sockets, so that both the dashboard and the gateway can keep a TCP connection open to publish/subscribe for events. This seemed the lightest and easiest-to-develop alternative to XMPP, RabbitMQ (AMQP) and MQTT.
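
As an illustration, a control event travelling between dashboard and gateway is a small JSON message of the following shape (field names as used by the gateway code in the post below; the device ID is a placeholder ZigBee node):

{"authkey": "authkey-for-user", "type": "device_control", "action": "publish",
 "payload": {"device_id": "000D6F00035589CF", "state": "1"}}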

Below is a video providing a walk through the system.


As usual, the code of the gateway is available on SourceForge, along with the dashboard system.

Bibliography:
  1. A. Monacchi, F. Versolatto, M. Herold, D. Egarter, A. M. Tonello, and W. Elmenreich. An open solution to provide personalized feedback for building energy management. ArXiv preprint arXiv:1505.01311, 2015.

Monday, July 13, 2015

Using web sockets to manage remote devices


The advent of web sockets opened the way to full-duplex communication channels between browsers and web servers. Being based on port 80, this also has the benefit of bypassing firewall policies. Perfect applications for this technology are real-time messaging scenarios, such as multiplayer games and chats.

We hereby discuss an example of a client and server to remotely control IoT devices based on web sockets. The server uses JDBC to connect to an internal MySQL database for authentication/authorization, the json-simple Java library to format messages, as well as the org.java-websocket 1.3.0 library for its networking aspects.


import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

class Server extends WebSocketServer {
    private java.sql.Connection dbConn;
    // maps each authenticated connection to its authentication key
    private Map<WebSocket, String> authenticatedConnections;
    // connections subscribed to device_control events, per authentication key
    private Map<String, ArrayList<WebSocket>> subscriptionControllers;
    // connections subscribed to device_status events, per authentication key
    private Map<String, ArrayList<WebSocket>> subscriptionDashboards;

    public Server(int port) throws UnknownHostException {
        super(new InetSocketAddress(port));
        authenticatedConnections = new HashMap<WebSocket, String>();
        subscriptionControllers = new HashMap<String, ArrayList<WebSocket>>();
        subscriptionDashboards = new HashMap<String, ArrayList<WebSocket>>();
    }

    public void setDBMSCredentials(String host, int port, String user, String passwd, String db) {
        try {
            dbConn = DriverManager.getConnection("jdbc:mysql://" + host + ":" + port + "/" + db, user, passwd);
            System.out.println("Connected to database " + db + " at " + host);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        System.out.println(conn.getRemoteSocketAddress().getAddress().getHostAddress() + " connected!");
    }

    @Override
    public void onClose(WebSocket conn, int code, String reason, boolean remote) {
        System.out.println(conn.getRemoteSocketAddress().getAddress().getHostAddress() + " has disconnected!");
        // remove the connection from both the authenticated and the subscribed ones if available
        if (authenticatedConnections.containsKey(conn)) {
            String authkey = authenticatedConnections.get(conn);
            // remove also from the subscriptions
            if (subscriptionDashboards.containsKey(authkey)) {
                subscriptionDashboards.get(authkey).remove(conn);
            }
            if (subscriptionControllers.containsKey(authkey)) {
                subscriptionControllers.get(authkey).remove(conn);
            }
            authenticatedConnections.remove(conn);
        }
    }

    @Override
    public void onError(WebSocket conn, Exception ex) {
        System.out.println("Error while interacting with "
                + conn.getRemoteSocketAddress().getAddress().getHostAddress() + ":\n\t" + ex.getMessage());
    }

    @Override
    public void onMessage(WebSocket conn, String s) {
        JSONParser parser = new JSONParser();
        try {
            JSONObject obj = (JSONObject) parser.parse(s);
            String authkey = (String) obj.get("authkey");
            String type = (String) obj.get("type");
            String action = (String) obj.get("action");
            // authenticate the user first
            if (authenticatedConnections.containsKey(conn) || this.authenticateUser(authkey)) {
                // remember the connection if it was not authenticated yet
                if (!authenticatedConnections.containsKey(conn))
                    authenticatedConnections.put(conn, authkey);
                // check the operation type
                switch (type) {
                case "device_status":
                    System.out.println(authkey + ": received " + action + " device_status from "
                            + conn.getRemoteSocketAddress().getAddress().getHostAddress());
                    if (action.equals("subscribe")) {
                        // create an entry for the given key if missing
                        if (!subscriptionDashboards.containsKey(authkey))
                            subscriptionDashboards.put(authkey, new ArrayList<WebSocket>());
                        // avoid multiple entries for the same connection
                        if (!subscriptionDashboards.get(authkey).contains(conn))
                            subscriptionDashboards.get(authkey).add(conn);
                        System.out.println("\tAdding " + authkey + " to subDash");
                    } else { // publish
                        if (subscriptionDashboards.containsKey(authkey))
                            this.sendAll(s, subscriptionDashboards.get(authkey));
                    }
                    break;
                case "device_control":
                    System.out.println(authkey + ": received " + action + " device_control from "
                            + conn.getRemoteSocketAddress().getAddress().getHostAddress());
                    if (action.equals("subscribe")) {
                        // create an entry for the given key if missing
                        if (!subscriptionControllers.containsKey(authkey))
                            subscriptionControllers.put(authkey, new ArrayList<WebSocket>());
                        // avoid multiple entries for the same connection
                        if (!subscriptionControllers.get(authkey).contains(conn))
                            subscriptionControllers.get(authkey).add(conn);
                        System.out.println("\tAdding " + authkey + " to subContr");
                    } else { // publish
                        if (subscriptionControllers.containsKey(authkey))
                            this.sendAll(s, subscriptionControllers.get(authkey));
                    }
                    break;
                }
            }
        } catch (ParseException e) {
            // out of protocol format
            System.out.println(conn.getRemoteSocketAddress().getAddress().getHostAddress()
                    + ": out of protocol format!");
        }
    }

    // forward a message to all open connections in the given list
    public void sendAll(String message, ArrayList<WebSocket> conns) {
        Iterator<WebSocket> i = conns.iterator();
        while (i.hasNext()) {
            WebSocket s = i.next();
            if (s.isOpen()) {
                s.send(message);
                System.out.println("\tRouting to " + s.getRemoteSocketAddress().getAddress().getHostAddress());
            }
        }
    }

    // a user is authenticated when its authentication key is found in the database
    public boolean authenticateUser(String authkey) {
        boolean auth = false;
        if (dbConn != null) {
            try {
                // prevent sql injection by using a prepared statement
                PreparedStatement ps = dbConn.prepareStatement("SELECT * FROM advisor.user WHERE user.authkey = ?");
                ps.setString(1, authkey);
                ResultSet rs = ps.executeQuery();
                if (rs.next()) {
                    auth = true;
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
        return auth;
    }
}

The server's main function might then look like this:

public static void main(String[] args) throws UnknownHostException {
    DeviceManager dm = new DeviceManager();
    Server srv = dm.new Server(8887);

    srv.setDBMSCredentials("localhost", 3306, "root", "", "advisor");
    srv.start();
    System.out.println("Websocket server started");
}

We can now test the server by subscribing for control events on one side and publishing control events on the other one. Let's do this in Python for simplicity.

import threading
import sys, os
from threading import Timer
from websocket import create_connection, WebSocketConnectionClosedException
import json
from plugwise import Stick, Circle, TimeoutException
import serial

# inside the gateway class: self._control_interface holds the server URL, e.g. "ws://localhost:8887"
self.ws = create_connection(self._control_interface)
print "connection with remote server established!"
# subscribe to control events for the given authentication key
self.ws.send(json.dumps({'authkey': 'authkey-for-user',
                         'action': 'subscribe',
                         'type': 'device_control'}))
print "successfully subscribed to device control events!"
# start a listener for control events
threading.Thread(target=self.listen_device_control_events).start()

This basically creates a connection to the server, sends a subscription for 'device_control' events, and spawns a thread listening for incoming events, which is implemented as follows:

def listen_device_control_events(self):
    # listen for incoming commands until the script is voluntarily terminated
    while not self._exit:
        try:
            received = json.loads(self.ws.recv())
            if 'type' in received and received['type'] == 'device_control':
                print 'received control event to', received['payload']['state'], 'for', received['payload']['device_id']
                self.set_circle(received['payload']['device_id'], int(received['payload']['state']))
        except ValueError as e:
            # out of protocol message
            print e
        except WebSocketConnectionClosedException:
            print "the server closed the socket"
            break
    self.ws.close()

Upon reception of control events, the code simply calls the method set_circle (defined in the python-plugwise library) to control the switch status of connected ZigBee wireless nodes.

Now a device_status event can be published each time a device becomes available in the network, by simply calling the following function:

def update_circle_status(self, circle_id, status):
    # publish the availability of the given circle to subscribed dashboards
    self.ws.send(json.dumps({'authkey': self._settings['apikey'],
                             'action': 'publish',
                             'type': 'device_status',
                             'payload': {'device_id': circle_id, 'state': status}}))

For instance, an event can be sent when initiating all ZigBee circles as follows:

self.circles = {}
for c in self._settings['circles']:
    self.circles[c] = Circle(c, self.stick)
    self.update_circle_status(c, 1)  # report all connected circles as available

Here is a simple implementation of the controlling client, which simply sends control events based on the user's input:

from websocket import create_connection, WebSocketConnectionClosedException
import json

ws = create_connection("ws://143.205.116.250:8887")

while True:
    state = raw_input("Insert status of device (empty to quit): ")
    if not state:
        break
    ws.send(json.dumps({'authkey': 'authkey-for-user',
                        'action': 'publish',
                        'type': 'device_control',
                        'payload': {'device_id': '000D6F00035589CF', 'state': state}}))
ws.close()

The code was integrated into the Mjölnir gateway to provide an event mechanism for remote device control.

Authenticating and authorising users in RabbitMQ

RabbitMQ is a messaging broker: basically an infrastructure providing message queues to which applications can push and from which they can pull data messages. Such time asynchrony decouples data producers and consumers, and provides interoperability between clients running on different machines and technologies. In particular, RabbitMQ implements the Advanced Message Queuing Protocol (AMQP), a lightweight application-level binary protocol based on TCP [1]. AMQP, together with MQTT, XMPP and CoAP, is emerging as a leading machine-to-machine application protocol for internet-of-things applications [2]. An important aspect of RabbitMQ is also the reliability of its message queues (e.g. persistence of messages, retransmission, etc.) as well as its scalability over clusters of computers [3]. Its plugin mechanism also allows for connecting the broker to others speaking different protocols, such as MQTT. A relevant issue in M2M application protocols is, however, authentication and authorisation management. This is mainly due to the fact that those protocols originated in the WSN community, where the network was normally managed by the same company or individual.

A basic approach is to add users and define their privileges using the rabbitmqctl command-line interface, or the RabbitMQ HTTP API [4]. Accordingly, access control is provided by RabbitMQ using an internal database, which is initialized with a guest user entitled to log in only from localhost.
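
For instance, a user with full permissions on the default '/' vhost can be created as follows (username and password are placeholders):

rabbitmqctl add_user alice s3cret
rabbitmqctl set_permissions -p / alice ".*" ".*" ".*"

with the three patterns granting configure, write and read permissions respectively.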

The main alternative we discuss here is the use of an external authentication server, based on a RESTful interface. In particular, we use the rabbitmq_auth_backend_http plugin available at [5]. 
As also shown in [6], the plugin can simply be downloaded with wget from the community plugins page [7], placed directly in the rabbitmq/plugins folder, and enabled with ./sbin/rabbitmq-plugins enable rabbitmq_auth_backend_http. The next step is to configure the plugin to connect to our authentication server. To this end, we need to modify the ./etc/rabbitmq.config file, which should look like this:

[
  {rabbit, [{auth_backends, [rabbit_auth_backend_http]}]},
  {rabbitmq_auth_backend_http,
   [{user_path,     "http://www.domainname/rmq_auth.php"},
    {vhost_path,    "http://www.domainname/rmq_auth.php"},
    {resource_path, "http://www.domainname/rmq_auth.php"}]}
].

with the first line enabling the auth_http backend and the following ones pointing the plugin to the target server. As shown in [5], the plugin mainly performs three calls: user_path to authenticate a user, vhost_path to grant access to a certain vhost, and resource_path to grant access to a specific resource, such as a queue or an exchange. We provide below a very simple script granting access to the default '/' vhost to all users registered in our MySQL db. In particular, we avoid sending username and password in favour of a hashed token (authentication_key). For simplicity, we omit here the connection details and the "SELECT * from user WHERE authentication_key = %s" query implemented in the getUser function.
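
Concretely, the broker issues plain HTTP GET requests and expects "allow" or "deny" in the response body. With the script below, the exchanges look roughly like this (query strings abbreviated; the exact parameter set is documented in [5]):

GET /rmq_auth.php?username=<authkey>&password=x  ->  allow management
GET /rmq_auth.php?username=<authkey>&vhost=/  ->  allow
GET /rmq_auth.php?username=<authkey>&vhost=/&resource=queue&name=<user>_device_status&permission=read  ->  allow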

<?php

// Authentication script for RabbitMQ
include("connect.php");

if (isset($_GET['username'])) {

    $authkey = $con->escape($_GET["username"]);

    // first of all check if the user exists
    $user = $con->getUser($authkey);

    if ($user == -1) {
        echo "deny";
    } else {
        // go ahead with checking
        if (isset($_GET['password'])) {
            // AUTHENTICATION mechanism:
            // holding a valid authkey is enough, allow the user
            echo "allow management";
        } else if (isset($_GET['name'])) {
            // AUTHORIZATION mechanism (RESOURCE):
            // both vhost and resource are set;
            // a certain user can only access its specific resources
            $resource_name = $con->escape($_GET["name"]);

            // a user can only access its exchange and 2 specific queues
            $valid_names = array($user, $user."_device_control", $user."_device_status");

            if (in_array($resource_name, $valid_names)) {
                echo "allow";
            } else {
                echo "deny";
            }
        } else if (isset($_GET['vhost'])) {
            // AUTHORIZATION mechanism (VHOST):
            // for simplicity we only have '/' as unique vhost
            echo "allow";
        } else {
            echo "deny";
        }
    }
} else {
    echo "deny"; // deny access to the user
}

?>

The example Python code shown on the RabbitMQ site would now look like this for the sender:

#!/usr/bin/env python
import pika

credentials = pika.PlainCredentials('authenticationkey-of-administrator', 'test')
parameters = pika.ConnectionParameters(credentials=credentials, host="143.205.116.250", virtual_host='/')
connection = pika.BlockingConnection(parameters)

channel = connection.channel()

channel.exchange_declare(exchange='Administrator', type='direct')
#channel.queue_declare(queue='Administrator_device_status')

channel.basic_publish(exchange='Administrator',
                      routing_key='device_status',
                      body='Hello World!')
print " [x] Sent 'Hello World!'"


connection.close()

and for the receiver:

#!/usr/bin/env python
import pika
from pika.exceptions import ProbableAccessDeniedError, ProbableAuthenticationError


def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

credentials = pika.PlainCredentials('authenticationkey-of-administrator', '')
parameters = pika.ConnectionParameters(credentials=credentials, host="143.205.116.250", virtual_host='/')
connection = pika.BlockingConnection(parameters)

channel = connection.channel()
channel.exchange_declare(exchange='Administrator', type='direct')

queue_name = 'Administrator_device_status'
result = channel.queue_declare(queue=queue_name)
channel.queue_bind(exchange='Administrator', queue=queue_name, routing_key='device_status')


channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
try:
    print ' [*] Waiting for messages. To exit press CTRL+C'
    channel.start_consuming()
except KeyboardInterrupt:
    print 'Terminate'


This basically creates an exchange called Administrator, with a queue named Administrator_device_status accessible solely by the administrator user. Such resources are therefore replicated for each user. For instance, this could allow users to remotely control their IoT devices.


Links:

  1. https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol
  2. http://postscapes.com/internet-of-things-protocols
  3. http://www.rabbitmq.com/features.html
  4. http://hg.rabbitmq.com/rabbitmq-management/raw-file/3646dee55e02/priv/www-api/help.html
  5. https://github.com/rabbitmq/rabbitmq-auth-backend-http
  6. http://correlia.blogspot.co.at/2012/07/rabbitmq-with-http-backend-plugin.html
  7. https://www.rabbitmq.com/community-plugins.html