Overview: My Internet of Things and MobileFirst adventure

Featured

UPDATED:  I’ve done some follow-up work on this project and rather than keep pasting new additions and changes onto the existing content, I have decided to rewrite the series to better reflect the process from start to finish.  If you have read the series before, you may find some interesting new aspects added and others removed.  Some of the old content was no longer applicable due to updates in Bluemix services.  I also went a little off the deep end with an Apple Watch. 🙂

I am diving into a new project where my eventual goal is to have a mobile app on my phone communicate to a “thing” on the internet.  Yes, those are pretty poorly defined requirements so right off the bat I did some research to decide what the “thing” should be, what amazing task it should perform and how I should make it talk to my phone.

Due to its popularity and just because I thought it sounded cool, I decided my “thing” would be a Raspberry Pi.  These are cheap little single board computers that can run a number of open source operating systems and application packages.  With that decided, I ordered my Raspberry Pi starter kit and anxiously waited at the curb by my mailbox.

But what cool task would it do?  I wanted something related to home automation that was more than pressing a button and lighting a lamp, but I also wanted to keep it reasonably scoped.  I decided on a doorbell interface that would take a picture when the button is pressed and make that picture available to the mobile app.  Ok, for that I would need a camera module so I ordered that and went back to the curb by the mailbox to wait.

Now I knew what the thing would be and what task it would perform.  But how would it communicate to the mobile app?  After a little more research, I decided to leverage the IBM Bluemix platform and services it provides. Hey!  What a coincidence:  I work for IBM!  Ok, maybe it isn’t such a coincidence.  My plan all along was to demonstrate how I could use Bluemix and the IBM MobileFirst Services to build this app.  But in this process, I have learned a ton about non-IBM technology as well so this adventure is not at all just an IBM sales pitch.  But it will demonstrate how you can build applications leveraging cloud-based services and the Internet of Things.

While I was still waiting at the mailbox, I sketched out this architecture diagram for the eventual system:

Yes, this does look overly complicated for a doorbell.  But my not-so-hidden agenda here is to demonstrate how all these cloud and IoT components can come together to provide services for a mobile application.  There is a lot to learn here, even if you don’t want to build the whole system, so feel free to cherry-pick whatever helps.

There are four main pieces of functionality in this project:

  1. Push Notifications
  2. Picture capture
  3. Video capture
  4. Apple Watch

I ended up doing a LOT of iterating.  I am going to spare you some of that and present the tasks in a more sequential fashion, so you may find steps that don’t seem obvious at first; hopefully you will see later why I had to do them.

  1. Raspberry Pi Setup
    1. Set up the Raspberry Pi hardware
    2. Install the Raspberry Pi software
    3. Creating my first Node-RED flow on the Pi
    4. Add in the camera
  2. Bluemix Setup
    1. Set up the Bluemix application environment
  3. Implementation
    1. Enabling Push Notifications
    2. Requesting and Receiving a Picture
    3. Watching video from the Pi on the Phone
    4. Apple Watch

Creating a Fan-out menu in a Hybrid mobile app

The other day, I set out to create a fan-out menu in an Ionic app.  You know the type – you tap on a button such as Share and options for things like Facebook, Twitter, etc. appear in a fan of icons.  Tapping on one of the icons would then post your message to the selected social media site.

fan-out menu

Per my usual modus operandi, I started Googling, hoping to find a nice tutorial or example I could “repurpose”, but surprisingly, I didn’t find much.  So this post will document the solution I came up with.  The completed code can be found at https://github.com/dschultz-mo/Ionic-fan-out-menu-demo.

Create a starter app

Let’s start by creating a blank Ionic app.  If you are new to Ionic, it is really simple to get started.  Follow the brief Getting Started instructions to install Node and Ionic.

Now create a project based on the tabs template:

$ ionic start demoApp tabs

For now, you can just say ‘no’ when asked if you want to create an ionic.io account, but I would highly recommend you revisit this and dig into Ionic a bit more if you are doing hybrid development.

Change directory into the new app project and start the ionic server.  This will launch the app in your default browser.

$ cd demoApp
$ ionic serve

Add a button

Let’s add a button to the bottom of the first tab.  Tapping this button will open and close our fan-out menu.

Open the file www/templates/tab-dash.html.  Add the code

<button id="share-button">
  Share
</button>

just before   </ion-content>.

Open the file www/css/style.css.  Add the code

#share-button {
  position: relative;
  top: 0px;
}

Assigning a relative position with no offset may seem strange, but play along for a moment.  Hopefully the reason will become clear in a few more steps.

Save the files.  Because of live reload, the browser will instantly refresh and your new button will appear.

Share button.png

Add a single menu item

Let’s start with one menu item so we see how this works, then build from there.

Inside the button tag in tab-dash.html, add the following code:

<img class="menu-fan"
     src="https://www.facebookbrand.com/img/fb-art.jpg"
     style="left:40px;top:40px"/>

This will add a Facebook icon image inside the button offset down and to the right.

Add some styling to the css file for the class called menu-fan that is applied to the item in our fan-out menu:

.menu-fan {
  height: 20px;
  width: auto;
  position: absolute;
  z-index: 1000;
  opacity: 1;
}

Save the files.  You should now see the Facebook icon.

Facebook icon.png

Next, we need a way for a tap on the Share button to alternately hide and show the Facebook icon.  Angular has a built-in way to do that with ng-show. We will have taps on the share-button toggle the boolean value of a scope variable we will name showMenu using the ng-click directive.  We will then bind the ng-show directive of the Facebook img to that variable.  The html should look like this:

html-01.png
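Based on that description, the markup works out roughly like this (a sketch – the original screenshot may differ in attribute order and formatting):

<button id="share-button" ng-click="showMenu = !showMenu">
  Share
  <img class="menu-fan"
       ng-show="showMenu"
       src="https://www.facebookbrand.com/img/fb-art.jpg"
       style="left:40px;top:40px"/>
</button>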

Save.  Now clicks will cause the Facebook icon to alternately appear and disappear.

A note on positioning… The icon img is positioned using absolute positioning, with the left and top offsets specified in the style attribute.  This is why the Share button needed to be explicitly positioned as ‘relative’.  The rule is that absolute positioning is relative to the img’s first positioned (i.e., not static) ancestor element.  Positioning the Share button as relative makes it that ancestor, so the icon’s offsets are measured from the button itself.

Add some bling

Without a lot of imagination, you can see how we could create several icons for multiple social media outlets, fan them out in an arc and associate a scope action to each so that tapping on any one of them would invoke a routine that does what we want.  We will do that in a moment.  But right now, the Facebook icon just appears and disappears.  Not very exciting. Let’s add a little pizazz to this one icon first, then add in the others.

The ng-show directive simply adds an ng-hide class to the img when the user wants it hidden.  By default, all that class does is apply the style “display: none !important” to the img.  But, you can actually override what the ng-hide class does!

Go back to the style.css stylesheet.  Add the following to the menu-fan class:

  -webkit-transition: all linear 0.2s;
  -moz-transition: all linear 0.2s;
  -o-transition: all linear 0.2s;
  transition: all linear 0.2s;

Also, create a new .menu-fan.ng-hide class like this:

.menu-fan.ng-hide {
  left: 0px;  /* collapse to the Share button's upper-left corner */
  top: 0px;
  opacity: 0;
  -ms-transform: rotate(360deg);     /* IE 9 */
  -webkit-transform: rotate(360deg); /* Safari */
  transform: rotate(360deg);         /* Standard syntax */
}

The entire stylesheet should look like this now:
stylesheet-complete.png

Save the file and click on Share.  Now when the Facebook icon appears, it spins as it moves into position from the upper left corner of the Share button.  It spins back and disappears on the next click.  Let’s look at how the CSS we added does that.

First, we added a transition to the menu-fan class.  What this says is that any time an attribute is changed on the object, it should transition according to the given rule (there are four rules here to ensure cross-browser compatibility).  This particular rule says that it should apply to all attribute changes and that the change should be applied linearly over 0.2 seconds.  There are a lot of transition options you could experiment with to add even more pizazz.

Next, we defined some attributes that will apply to the object only when the .ng-hide class is applied to it.  The first two are positional – when the object hides, we want it to move to the upper-left corner of the Share button.  The transition attribute means that it will move back and forth linearly over 0.2 seconds.  Opacity of 0 means it is transparent, so as it moves, the img will also fade in and out.

The transform attribute is probably the least obvious.  rotate(360deg) means the element will spin around one revolution as it moves and fades over 0.2 seconds.  Here again, you have a number of transforms you can apply if you are looking for a different effect.

Expand to multiple menu items

Ok, so let’s scale this out.  Let’s say you want to add four social outlets – Facebook, Instagram, Pinterest and Twitter.  You could calculate the top and left values required for each to make a nice arc and add an img definition for each one of them in the html – hard coding the position, image source location and action you want performed.  But there may be a better way.  First off, the Angular ng-repeat directive is made for this sort of stuff.  It repeats over an array of items and builds the html you need.  We should really be using that.  So instead of code that looks like this:

html-02.png

It will look like this:

html-03.png
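In other words, a single img element driven by ng-repeat, roughly like this sketch (the exact way the action string is wired up to ng-click may differ in the original – $eval is one way to invoke it):

<img class="menu-fan"
     ng-repeat="item in socialFanButtons"
     ng-show="showMenu"
     ng-src="{{item.src}}"
     ng-click="$eval(item.action)"
     style="left:{{item.left}};top:{{item.top}}"/>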

  • ng-repeat – creates the img element for each item in a scope array called socialFanButtons.  We will define that array in a minute.
  • ng-src – the src of the img will be whatever the value of the src property of the array element is.
  • ng-click – clicking this image will invoke the function defined by the action property.
  • style – This will now pick up the left and top properties of the array element.

So now, we could just create an array in the DashCtrl controller like:

controllerArray
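Spelled out by hand, it would be something along these lines (the positions shown here are purely illustrative):

.controller('DashCtrl', function($scope) {
  $scope.showMenu = false;
  $scope.socialFanButtons = [
    { src: "https://www.facebookbrand.com/img/fb-art.jpg", action: "postFacebook()", left: "65px", top: "0px" },
    { src: "https://g.twimg.com/Twitter_logo_blue.png", action: "postTwitter()", left: "0px", top: "65px" }
    // ...one entry per social outlet, each with a hand-calculated left/top...
  ];
})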

Well, that does work, but let’s use the computer to do what it was made for – to compute.  Let’s create a routine that will compute the left and top values so we don’t have to.  We will define an array with the src and action fields we need, but then create a function that will compute the left and top values we need based on a direction (degrees), a spread angle (degrees) and a distance (radius in pixels).


.controller('DashCtrl', function($scope) {
  /**
   * Builds a fan of menu items.
   *
   * @param fanItems – An array of items, each object including the image and action for the
   *                   given menu item.
   *                   [
   *                     {
   *                       src: "path-to-image-file",
   *                       action: "scope variable function"
   *                     },
   *                     …
   *                   ]
   * @param direction – Direction in which the center of the fan should emanate from the
   *                    center of the container object (in degrees):
   *                    0 degrees – right
   *                    90 degrees – down
   *                    180 degrees – left
   *                    270 degrees – up
   * @param spread – Arc across which the icons should be dispersed, in degrees
   * @param distance – Distance from the center of the container object to the upper-left
   *                   corner of the icons (in pixels).
   */
  var buildFan = function (fanItems, direction, spread, distance) {
    var myFanItems = fanItems;

    // Compute the angle between icons in the menu
    var startAngle = direction - spread / 2;
    var endAngle = direction + spread / 2;
    var increment = Math.abs((startAngle - endAngle) / (fanItems.length - 1));

    // Compute the X and Y locations of each icon and
    // add to the array that will be returned.
    for (var i = 0; i < fanItems.length; i++) {
      var angle;
      if (startAngle < endAngle) {
        angle = startAngle + i * increment;
      } else {
        angle = startAngle - i * increment;
      }
      var x = distance * Math.cos(angle * Math.PI / 180);
      var y = distance * Math.sin(angle * Math.PI / 180);
      myFanItems[i].left = x + 'px';
      myFanItems[i].top = y + 'px';
    }
    return myFanItems;
  };

  var fanItems = [
    {
      src: "https://www.facebookbrand.com/img/fb-art.jpg",
      action: "postFacebook()"
    },
    {
      src: "https://i1.wp.com/dennisschultz.files.wordpress.com/2016/03/instagram_icon_large.png?ssl=1&w=450",
      action: "postInstagram()"
    },
    {
      src: "https://winsomekaty.files.wordpress.com/2015/04/official-pinterest-logo-tile-300×300.png",
      action: "postPinterest()"
    },
    {
      src: "https://g.twimg.com/Twitter_logo_blue.png",
      action: "postTwitter()"
    }
  ];

  $scope.showMenu = false;
  $scope.socialFanButtons = buildFan(fanItems, 45, 90, 65);
})
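To make the geometry concrete: with the values used above (direction 45, spread 90, distance 65) and four items, buildFan spaces the icons 30 degrees apart.  Rounded off (the code itself does not round), the computed offsets work out to roughly:

// angle  0°  ->  left: "65px",   top: "0px"
// angle 30°  ->  left: "56.3px", top: "32.5px"
// angle 60°  ->  left: "32.5px", top: "56.3px"
// angle 90°  ->  left: "0px",    top: "65px"

That is the quarter-circle arc of icons fanning down and to the right of the Share button.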


Conclusion

So here you see that by leveraging AngularJS directives, some creative CSS and some geometry, we have created a reusable pattern including a procedure and a stylesheet to generate fan-out menus.

Part 9: Apple Watch

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Ok, I know I’m getting really carried away on this thing now, but I got an Apple Watch and thought it might be kinda cool to see if I could create a Watch app for my doorbell app.

I’m going to focus on the unique challenges I ran into for this app and not get too deep into how you build a Watch app.  There are resources out there (although not as many as I had hoped) to teach you how.  One particularly good resource I purchased is a tutorials book from Ray Wenderlich called “watchOS 2 by Tutorials“.  This is definitely worth the price of admission if you are serious about learning how to build for the Watch.  One caution I will leave you with – make sure whatever resources you are using are for watchOS 2.  There were significant changes between version 1 and version 2 and there is still quite a bit of watchOS 1 info out there.

There are three features of the Doorbell app I would consider making available on the Watch.  Well, actually, there are really only three features in the whole Doorbell app:

  1. Push notifications when someone rings the doorbell.
  2. Displaying a picture from the Raspberry Pi
  3. Displaying video from the Raspberry Pi

It didn’t take me long to see that #3 is a long shot.  There are very few video viewing apps for the Watch and those that do exist provide the video content as files transferred to the Watch.  I considered rearchitecting things to capture short video clips to files that could be sent to the Watch, but I pretty quickly wrote that off.  I’ll focus on Push notifications and pictures.

A Watch app is not a separate app at all.  It is really an extension of the iPhone app.  A Watch app cannot exist without its “host” iPhone app.  You get started by adding a Watch target to your existing iPhone app.

Display a picture

My plan was to put a button on a Watch interface that would request a picture from the iPhone app and would then display it in a WKInterfaceGroup or something.  Creating the UI was pretty easy – there are a lot fewer options than on iOS.

The WatchConnectivity package gives you various ways to communicate between the iPhone and the Apple Watch.  Interfaces can be immediate (sendMessage) or background (transferFile, updateApplicationContext).  My first instinct was that I wanted it immediately so I should use sendMessage.  This actually did work, but sendMessage has a size limit on the payload.  I never did find an authoritative source that defined the specific value, but unless I really reduced the resolution on my picture, I exceeded it.

So, I took the approach to have the Watch request the picture using updateApplicationContext.  The iOS app would then send the takePicture command to the Raspberry Pi, receive the picture in packets, reassemble it, then send it to the Watch using transferFile.  This actually worked brilliantly – as long as the iOS app was in the foreground.

This is where I ran into another constraint from Apple.  The iOS app is VERY limited in what it can do when in the background.  Namely, it can only use the network for very limited reasons.  Publishing MQTT commands doesn’t seem to be one of them.  There are other ways to do this.  You can have an iOS app do some network operations if you can justify classifying it as a VOIP or News app.  Then it can do some http operations in the background.  I’m sure that’s how Watch apps such as weather apps get regular updates from the host app.

Nonetheless, I did get something to work as long as the iOS app is active.  I can get a picture from my Raspberry Pi to the Watch.

Refresh  Requesting  Returned

Push notifications on the Apple Watch

I learned that there is an involved set of rules that determines when a Push notification shows up on the Watch and when it shows up on the iPhone, even if you have built in the ability to handle the Push on the Watch.  You don’t get to choose – iOS makes the decision.  Basically, the notification will always go to the iPhone if it is not locked and asleep.  Even if it is, the Watch must be on your wrist before iOS will decide to let the Watch present the notification.  Makes sense I guess.  Why would you want a notification to show up on your Watch if you have the iPhone in your hand and you are looking at it?

The Watch app has an ExtensionDelegate class which is analogous to the AppDelegate of the iPhone app.  You override the method handleActionWithIdentifier to handle the custom action buttons (Picture and Video) I created in the AppDelegate class.  Now when the doorbell is pressed, Node-RED on the Raspberry Pi will send a request to the IBM Push Notifications service which will in turn have APNS send a push.  If the iPhone is asleep in my pocket, I will get a Push notice on the Watch with the Picture and Video buttons.

WatchPush

The problem is, what do I do now that I have received the Push on my Watch?  Since I already found that I can’t ask the iPhone app to go get me a picture and then update it on the Watch, I am kinda stuck.  I did end up implementing Handoff so once I tap either Video or Picture on the Watch, I can slide up on the lock screen icon on my iPhone and have it take me directly to the corresponding view and refresh it.

LockScreen

So as it turns out, the Apple Watch probably isn’t the ideal platform for a visual doorbell app.  But it was an interesting experiment.  At this point, I think I have milked this adventure for all it is worth.

Conclusion

Well, it has been quite an adventure.  I can’t say that I have a terribly useful app at the end of all this, but it did give me a platform to learn a lot of stuff.

If you are interested, the full code for the project can be found in GitHub.

 

Part 8: Watching video from the Pi on the Phone

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Let me say right off that there are dozens of alternatives available if you want to stream video from a Raspberry Pi to another device.  They all seem to have their pros and cons.  Some are really inefficient.  Others require a lot of processing and an additional app server.  Still others are very proprietary and only work on a very limited set of devices or platforms.  So, feel free to try your own approach.

I decided to go with the VLC media player.  I have known VLC as a good media player for a long time, but I had no idea it can do so many more things – in particular, act as a video streamer.

So here’s the basic plan:

  1. Configure the Raspicam to record video to a fifo file.
  2. Configure VLC on the Raspberry Pi to stream that fifo content to a URL
  3. Connect a VLC player object embedded in the iPhone app to the URL stream.

Install VLC on the Raspberry Pi

You can install VLC from the standard Debian package repository on the Raspberry Pi:

sudo apt-get install vlc

That was easy.

Add the VLC SDK to the iOS app project

There’s a Pod for that.  In fact there are multiple.  I went with MobileVLCKit.

Create the Raspberry Pi Node-RED flows

Let’s start on the Raspberry Pi side and deal with sending the command from the app later.  Let me warn you, this gets a little crazy.  There are several steps here that I probably could have done more simply in a shell script.  But I tried to stay in Node-RED as much as I could, which ended up complicating some things.  Here we go.

Start Stream Flow

Create the following flow:

Start Stream Node-RED flow

I warned you.  Let’s go through the nodes:

  • startStream  Receive the IoT Foundation startStream command from the application.
  • mkfifo  Creates a fifo file on the filesystem.  This is an ‘exec’ node, which just runs the given OS command.  It returns stdout as the payload of the message, but in this case, I don’t really care about the contents of stdout.
  • start raspivid  Invokes the shell command “/home/pi/bin/startraspivid.sh”.  This shell script starts up raspivid recording 640×640 video at 25 frames per second.  This seemed to produce decent looking video yet kept the bandwidth requirements down.  These numbers could definitely be tweaked if needed.  raspivid is streaming to the fifo file named vidstream.  Yes, the rest of the command looks odd.  The issue was that I needed the node to launch raspivid and let it continue to run on another process yet return execution to the next node.  I tried a number of approaches and the only one that seemed to work was to background it (‘&’), but I found I also had to redirect stdout and stderr (‘ > raspividout.txt 2>&1’) or execution wouldn’t return to the node.


raspivid -t 0 -o vidstream -w 640 -h 640 -fps 25 > raspividout.txt 2>&1 &

  • start vlc  This is similar to the raspivid node.  It invokes a bash shell script that starts up VLC (‘cvlc’).  VLC will take the vidstream fifo file as input and produce a video stream on port 8554.  I had to do the same dance with backgrounding and redirecting output as I did with the raspivid script.
    VLC has a seemingly infinite number of command line switches and arguments.  I won’t claim to have figured all this out myself.  I relied heavily on the VLC documentation and a couple blogs on the Raspberry Pi forum.


cvlc -vvv stream:///home/pi/vidstream --sout '#standard{access=http,mux=ts,dst=:8554}' :demux=h264 > vlcout.txt 2>&1 &


  • get IP Address  This function node figures out the IP address of the Raspberry Pi.  I need this because the iPhone app will need to know where to connect the VLC player to view the stream.  Now this brings up a significant limitation to the VLC approach – the Raspberry Pi must be visible to the mobile device on the network.  Unless you are willing to go through all the steps necessary to get your Raspberry Pi a public IP address and deal with all the security issues that will arise, that’s a pretty big limitation.  I decided I can live with the limitation that video only works while my phone is on my local Wifi network with my Raspberry Pi.


var ifaces = context.global.os.networkInterfaces();
var address = "";
Object.keys(ifaces).forEach(function (ifname) {
  ifaces[ifname].forEach(function (iface) {
    if ('IPv4' !== iface.family || iface.internal !== false) {
      // skip over internal (i.e. 127.0.0.1) and non-ipv4 addresses
      return;
    }
    address += iface.address;
  });
});
msg.payload = JSON.stringify({"piIpAddress": address});
return msg;

  • streamStarted  An ibmiot output node that publishes the device event streamStarted with a payload that includes the IP address of the Raspberry Pi.

Stop Stream Flow

The Stop Stream flow is invoked by receiving a stopStream IoT Foundation command from the mobile app.  It simply kills the Raspbian processes for vlc and raspivid and then publishes a device event acknowledging that it did.


Stop Stream Node-RED Flow

    • stopStream  ibmiot input node that receives the command stopStream.
    • Stop Stream  An exec node that runs the script stopstream.sh.


pkill -e vlc
pkill -e raspivid


    • format Message  My mobile app doesn’t actually do anything with this, but I created this node to parse out the process identifiers (PIDs) of the vlc and raspivid processes that were killed and include them in the payload of the streamStopped event.


var temp = msg.payload.replace(/vlc killed \(pid /g, "{\"vlcpid\": ");
temp = temp.replace(/raspivid killed \(pid/g, "{\"raspividpid\": ");
temp = temp.replace(/\)/g, "}, ");
temp = temp.substring(0,temp.length-3);
temp = '{\"killed\": [' + temp + ']}';
msg.payload = JSON.stringify(JSON.parse(temp));
return msg;

  • streamStopped  An ibmiot output node that publishes the device event streamStopped with a payload that includes the PIDs of the killed processes.

Add functionality into iOS app

Again, I won’t go into all the details here, but I want to hit the high points.  What needs to be done in the app is

  1. Add a UIView object to a view to act as the ‘drawable’ for the mediaPlayer
  2. Add a VLCMediaPlayer object.
  3. Send the startStream command through IoT Foundation
  4. Receive the IP address and have the mediaPlayer attempt to connect to it.

Add a UIView object

I decided to create a tabbed view controller, making the first tab the Picture view controller and the second the Video view controller. The Video view controller just has a UIView on the storyboard with an IBOutlet in the view controller code.


@IBOutlet weak var movieView: UIView!

Add a VLCMediaPlayer object

With the MobileVLCKit pod added to my Xcode workspace, this is as simple as


private let mediaPlayer = VLCMediaPlayer()

In the view controller’s viewDidLoad method, I configure the mediaPlayer.


/* setup the media player instance, give it a delegate and something to draw into */
mediaPlayer.setDelegate(self)
mediaPlayer.drawable = movieView

Send the startStream command

In the view’s viewWillAppear method, I send the startStream command.  This is really the same as sending the takePicture command. I do provide a completion handler, but this time, the handler will only be invoked once – when the streamStarted event is published by the Raspberry Pi.

Receive the IP address and connect the MediaPlayer

The callback gets invoked when the streamStarted event is published.  The payload of the streamStarted event is the IP address of the Raspberry Pi, so all that is left is to set the mediaPlayer’s media value and tell it to play.


let url = NSURL(string: "http://" + ipAddress + ":8554/")
let media = VLCMedia(URL: url)
mediaPlayer.setMedia(media)
mediaPlayer.play()

Remember, though, that as we discussed earlier, the mediaPlayer will not be able to connect to the stream if the Pi is not visible to the mobile device. If that is the case, the mediaPlayer object throws up an alert. I could have handled that in my code with a delegate, but I decided to just let the mediaPlayer deliver the bad news.


Summary

There are a lot of ways to skin this cat.  I chose to use VLC because it was relatively easy (compared to other methods I considered), but it does have the significant limitation that my Raspberry Pi has to be network-ly visible to my iPhone.  But nonetheless, while I am in the same Wifi region, it is pretty cool that when I hear the doorbell ring, I don’t even need to get up out of my recliner to see if it is a siding salesman.  The world of WALL-E is fast approaching.

So next I went a little crazy.  I got a new Apple Watch and thought I would see if I could build it into this environment in the next post.

Part 7: Requesting and Receiving a Picture

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

The next step is to see if we can make the iOS app request a picture from the Raspberry Pi and have it respond with one.

I started by creating a simple iOS app that had a button to start the request process and a UIImageView to display the picture.  The UI eventually evolved a bit, but this gave me a place to start and keep it simple.

MQTT Mobile Client software

There is no iOS SDK specific to the IBM IoT Foundation.  There are, however, several MQTT pods out on cocoapods.org that could do the job.  MQTT is the protocol underlying the messaging in the IoT Foundation.  I chose to use MQTTClient.  I followed the simple instructions to install using cocoapods.  MQTTClient provides interfaces at various levels.  The simplest way to use MQTTClient is through MQTTSessionManager.

IBM IoT Foundation conventions

The IoT Foundation does layer on some extensions that are supported by conventions you must observe when using MQTT. The full IoT Foundation documentation is very helpful.

First of all, the IoT Foundation considers there to be two types of “things” in the Internet of Things: devices and applications.

Devices:

  • A device can be anything that has a connection to the internet and has data it wants to get into the cloud.
  • A device is not able to directly interact with other devices.
  • Devices are able to accept commands from applications.
  • Devices uniquely identify themselves to the IoT Foundation with an authentication token that will only be accepted for that device.
  • Devices must be registered before they can connect to the IoT Foundation.

Applications:

  • An application is anything that has a connection to the internet and wants to interact with data from devices and/or control the behaviour of those devices in some manner.
  • Applications identify themselves to the IoT Foundation with an API key and a unique application ID.
  • Applications do not need to be registered before they can connect to the IoT Foundation, however they must present a valid API key that has previously been registered.

For some reason, it took me a while to wrap my head around this.  I wanted to consider my iPhone a “device”, but in fact, it would be an “application”. For one, it is going to be sending commands to get information from the Pi.  Secondly, note that you don’t register applications in advance – they just provide a unique key.  This would be important if I were to go into production with my doorbell.  I can’t register every possible mobile phone in advance.

Commands:

  • Commands are the mechanism by which applications can communicate with devices. Only applications can send commands, which must be issued to specific devices.

Events:

  • Events are the mechanism by which devices publish data to the Internet of Things Foundation.

So my iOS app will be an application that will send commands and subscribe to events that will be published by my Raspberry Pi which is a device.  Got it.

Here are some other MQTT concepts and their manifestation in IoT Foundation.

Connection parameters

  • MQTT host:  The host of the IoT broker is org_id.messaging.internetofthings.ibmcloud.com where org_id is the org assigned to your IoT Foundation service when it was created.  You can find it from the Bluemix dashboard for your application.
  • MQTT client identifier:  This is an identifier the application must provide when it connects to the IoTF.  It must be unique.  The convention is a:org_id:app_id, where org_id is the same as above, and app_id is something unique for each app instance.  Since there will only be one app instance at a time for my simple app, I just used the Bluemix AppID for this.
  • MQTT username: Use the IoT Foundation API Key generated when you created your IoT Foundation service.
  • MQTT password: Use the IoT Foundation Authentication Token

Topic names

The IoT Foundation uses a strict convention for MQTT topics that map to device types, device IDs, commands, and events.  Reference the documentation for all the details, but here are the topic strings I used for the command I would send from the application and the event I would subscribe to:


takePictureCommand = "iot-2/type/" + deviceType + "/id/" + deviceID + "/cmd/takePicture/fmt/json"
pictureTakenEvent = "iot-2/type/" + deviceType + "/id/" + deviceID + "/evt/pictureTaken/fmt/json"


Connecting to IoT Foundation from the Application

Again, sparing some of the details, here’s my connect() method:


func connect() {
    if (iotfSession == nil ||
        iotfSession?.state != MQTTSessionManagerState.Connected) {
        let host = orgId + "." + ioTHostBase
        let clientId = "a:" + orgId + ":" + ioTAPIKey
        iotfSession?.connectTo(
            host,
            port: 1883,
            tls: false,
            keepalive: 30,
            clean: true,
            auth: true,
            user: ioTAPIKey,
            pass: ioTAuthenticationToken,
            will: false,
            willTopic: nil,
            willMsg: nil,
            willQos: MQTTQosLevel.AtMostOnce,
            willRetainFlag: false,
            withClientId: clientId)
        // Wait for the session to connect
        while iotfSession!.state != MQTTSessionManagerState.Connected {
            NSRunLoop.currentRunLoop().runUntilDate(NSDate(timeIntervalSinceNow: 1))
        }
        // Subscribe to the pictureTaken event
        iotfSession!.subscriptions = [pictureTakenEvent!: 2]
    }
}

Receiving the picture

First step is to send the takePicture command to the Raspberry Pi through the IoT Foundation.  The actual content of the message is irrelevant in this case.  The operation on the Pi will be invoked just by receiving the command.


iotfSession!.sendData(
    createMessage(dictionary),
    topic: takePictureCommand,
    qos: MQTTQosLevel.ExactlyOnce,
    retain: false)

The picture will come back from the Raspberry Pi as a Base64 encoded string as the data in the pictureTaken event.  This wasn’t as easy as I expected.  Turns out that MQTT has a limit to the size of the messages it will transmit.  Who knew?  Well, I didn’t anyway.  So, as you will see shortly, I break the picture up into 3 KB chunks and send them back in sequence.  So on the application side, I created a completion handler that would piece the picture back together again.  The completion handler gets called each time a packet event arrives.

The MQTTSessionManagerDelegate routine handleMessage gets invoked each time a packet arrives, which then invokes the callback.


extension MQTTDoorbellClient: MQTTSessionManagerDelegate {
    func handleMessage(data: NSData!, onTopic topic: String!, retained: Bool) {
        do {
            // try to convert the data to a dictionary
            let obj = try NSJSONSerialization.JSONObjectWithData(
                data,
                options: NSJSONReadingOptions.MutableContainers)
            switch topic {
            case pictureTakenEvent!:
                if let callback = self.pictureCompletionHandler {
                    callback(obj)
                }
            default:
                logger.logInfoWithMessages("Unrecognized topic \(topic)")
            }
        } catch {
            logger.logInfoWithMessages("Could not convert message data")
        }
    }
}

The callback is smart enough to know when it receives the final packet and then completes the reconstruction of the image and eventually displays the picture in the UIImageView.


let picId = packet["pic_id"] as! String
let data = packet["data"] as! String
let pos = packet["pos"] as! Int
let total = packet["size"] as! Int

// Initialize picture if this is the first packet
if pos == 0 {
    let timestamp = packet["pic_date"] as! Double
    picture = Picture(name: picId, date: NSDate(timeIntervalSince1970: timestamp / 1000.0))
}

// Finalize the picture and call the completion handler if this is the last packet
if pos == total - 1 {
    picture.addPacket(data, finalPacket: true)
    pictureImage.image = picture.getImage()
} else {
    picture.addPacket(data, finalPacket: false)
}

Servicing the command from the Raspberry Pi

There are Node-RED nodes available to communicate directly to the IoT Foundation from within Node-RED.  Follow the instructions for node-red-contrib-scx-ibmiotapp to set it up in the Raspberry Pi.

With the ibmiot nodes in place, create the following flow:

Node-RED flow that takes a picture

Some details:

  • takePicture  This is the IoT node that receives the takePicture command from the application.  Configure it with the requested credentials.  The device type and device id must match those you created for the Raspberry Pi in the IoT Foundation dashboard.  The command is “takePicture”.
    takePictureNode
  • Take Picture  This function node is pretty much the same as in an earlier blog post.  It invokes the camera through JavaScript in the Node.js environment.  It sends on a message containing the picture’s filename and timestamp.


var encoding = "png";
var currDate = new Date()
var currTime = currDate.getTime();
// Use the current timestamp to ensure
// the picture filename is unique.
var pictureFilename = "/home/pi/pictures/" + currTime + "." + encoding;
var opts = {
mode: "photo",
encoding: encoding,
quality: 75,
width: 400,
height: 400,
output: pictureFilename,
timeout: 1000
};
// Use the global RaspiCam to create a camera object.
var camera = new context.global.RaspiCam( opts );
// Take a picture
var process_id = camera.start( opts );
return {payload: pictureFilename,
filename: pictureFilename,
filedate: currTime};


  • delay 2 s  The camera is asynchronous so you need to give it a little time to take the picture and save it to a file before processing.
  • Base64 encode  This is an exec node which runs an operating system command.  The command value is “base64 -w0” and the msg.payload value is checked.  This means an OS command base64 will run against the filename provided by the Take Picture node.  “-w0” keeps base64 from injecting newline characters in the string.  The Base64 string is sent to the next node as the payload.
    Base64Node
  • splitIntoPackets  This function node slices the payload into an array of messages containing 3k chunks of the data.  The date, picture name, number of packets and packet index are all added to each message as well.  The array becomes a series of messages sent by the node.  The node also has an output that sends the picture name and one that sends the total number of packets.  These are just for debugging purposes.


var packet_size = 3000;
var picId = msg.filename;
var picDate = msg.filedate;
var encoded = msg.payload;
var len = encoded.length;
var no_of_packets = Math.ceil(len / packet_size);
var pos = 0;
var start = 0;
var end = packet_size;
var outputMsgs = [];

while (start <= len) {
    var data = {
        "data": encoded.substring(start, end),
        "pic_id": picId,
        "pic_date": picDate,
        "pos": pos,
        "size": no_of_packets};
    outputMsgs.push({payload: JSON.stringify(data)});
    end += packet_size;
    start += packet_size;
    pos = pos + 1;
}

return [outputMsgs, {payload: picId}, {payload: no_of_packets}];

  • pictureTaken  This ibmiot output node sends the device event “pictureTaken” for each packet.  Use the same values for authentication, device type and device id that you used for the ibmiot in node.
    PictureTakenNode

Where are we?

Ok, we covered a lot of ground in this post.  We talked a bit about MQTT and how IoT Foundation fits with it.  We looked at how the Swift code will use the MQTTClient library to send commands and receive picture packets and reassemble them.  We also looked at how the Raspberry Pi will receive the IoT command through the IoT input node, use the Raspicam Node module to take a picture, use an OS command to convert it to Base64, use a function node to packetize it, then use the IoT output node to send the packets back to the phone as events.

I didn’t show it here, but I added some code to have the iOS app invoke this whole process in response to the user’s choice from the Push notification.  So I accomplished what I had set out to do.  The Pi sends a push notification to the phone to let the user know there is someone ringing the doorbell.  The user can then choose to see a picture of the visitor by tapping Picture in the push notification.  The app will then send an MQTT command to the Pi through the IBM IoT Foundation.  The Pi then takes a picture and sends it back to the phone which displays it.

Next

I decided to go for some bonus points.  What if the user could actually watch video of the visitor?  Next post.

“Containing” your MobileFirst server

IBM MobileFirst Platform Foundation version 7.1 adds support for running the platform in a container on the cloud in IBM Bluemix.  The idea is that anyone should be able to spin up a MobileFirst server in the cloud, deploy applications and adapters to it and reconfigure mobile apps to use it quickly and easily.  I thought I would give it a try with my Internet of Things Doorbell app and see how easy it really is.

Here is the architecture of my doorbell system.  The only parts that should be affected are the MobileFirst Platform Foundation (MFP) server component and the native iOS Mobile App itself.

Doorbell Architecture

MobileFirst 7.1 provides two options for moving to containers: the prebuilt Getting Started Image and the ability to roll your own image with Evaluation on Containers.

The Getting Started Image is really meant for kicking the tires and evaluating MFP on Bluemix.  It is not at all intended for production.  The server, data proxy, analytics server, database and a sample application are all crammed into one container.  You also cannot make any runtime changes such as renaming the prebuilt runtime, adding more runtimes or customizing the runtime such as adding back-end code, SSL certificates, etc.  The up-side is that you should be able to deploy the prebuilt image very quickly and easily add your own adapter and application.

The Evaluation on Containers is meant for building production deployments.  Why it is called “Evaluation” when Getting Started is really for evaluations is a mystery to me.  At any rate, Evaluation on Containers is not a prebuilt container.  It is actually a “package” of artifacts you use to build your own custom container in Docker.  You build the container locally on your machine then push it up to Bluemix Containers to run it.

For today’s experiment, I’m going to use the prebuilt Getting Started Image.

Create the MFP container

Creating the container is similar to creating an application on Bluemix.

  1. Log into Bluemix at https://ibm.biz/IBM-Bluemix.  If you don’t yet have a Bluemix ID, get one for free by clicking the SIGN UP link in the upper right corner of the landing page.
  2. From the Dashboard, click START CONTAINERS.

    Start Containers

  3. On the next page, there will be a list of all the available prebuilt container images.  Click ibm-mobilefirst-starter.  This is the Getting Started Image.

    ibm-mobilefirst-starter container image

  4. This brings up the creation page for the container.  There are a few things that need to be configured on this page which may not be obvious but are very important.
    1. Provide a Container name.  This can be whatever you want but must be unique within your “space”.
    2. Select the Size of the container.  The Knowledge Center recommends at least the “Small” configuration.
    3. The Public IP address field defaults to Leave Unassigned.  You definitely don’t want that.  The container will need a public IP so that the iOS app can connect to it.  If a public IP has already been allocated to the account, select it here.  If not, select Request and Bind Public IP.
    4. Finally, assign the Public ports that will be opened.  It is required that ports 80, 9080 and 443 be opened.  This field is a little confusing.  The port numbers must be entered separated by commas.  The placeholder example shows them also separated by spaces but the field actually does not allow spaces.  Enter 80,9080,443.

    GettingStarted Container creation page

  5. Click Create to deploy the container.  After only a few seconds, “Your container is running” will appear on the container’s overview page.  Note the public IP address that was bound to the container.  This is how users and apps will get access to it.

    Container Overview Page

Register with the Container

The admin user that will be used to access the container requires a password.  This is the opportunity to define it.  Please be more creative with your password than “admin”.  Anyone that knows the public IP address of the container will be able to get to this page and gain access to the Operations Console if they guess the password.

  1. Open a browser tab to the Public IP address. (i.e. http://134168.6.227, in my case).  The first time this URL is opened, the container registration page will be shown.  Enter a password here for the admin username.
  2. Click Register.

    Container Registration Screen

  3. After a few seconds, the MobileFirst Operations Console will appear.  There is one existing runtime, MobileFirstStarter, which contains three applications and three adapters.  These support the WishList sample application.

Accessing sample code, Operations Console and Analytics

The next time the container’s Public IP address is opened, a landing page will be shown that enables you to download the WishList sample application code, open the Operations Console to manage your applications, open the Analytics Console to view analytics and data about your apps and download server logs for troubleshooting.

Container Landing Page

Update the MobileFirst project

Some modifications to the settings in the MobileFirst project containing the application and adapters are necessary to prepare the artifacts for deployment to the new runtime.  I have a MobileFirst project called DoorbellIOSNative that contains the CloudantAdapter adapter and an APIiOS SDK app.  In my case, the adapter is completely portable and will require no changes to run it on a new runtime.  I will, however, need to modify the SDK app’s worklight.plist file to point to my new host/port and the name of the new runtime.

  1. Open the DoorbellIOSNative/apps/APIiOS/worklight.plist file in an editor.
    1. Replace the host value with the Public IP of the container.
    2. Replace the port value with 9080.
    3. Replace the wlServerContext value with “/MobileFirstStarter/”
  2. Rebuild the project and prepare it for deployment.  From the MFP command line tool, use the command ‘mfp push‘.  From MFP Studio in Eclipse, Run As > Run on MobileFirst Development Server.

Deploy the adapter and application to the container

Now the application and adapter need to be deployed to the MobileFirst runtime running in the container on Bluemix.

  1. Open the MobileFirst Operations Console (http://<your-public-ip>:9080/worklightconsole).
  2. Click Add new app or adapter.  Select the CloudantAdapter.adapter file in the bin directory of your MobileFirst project.
  3. Repeat the process for the APIiOS-iOSnative-1.0.wlapp application file.

Update the native iOS application

The application itself needs to be updated so that it will look in the right place for the MFP server.

  1. Open the Xcode project.
  2. Update the references
    1. If the worklight.plist and WorklightAPI artifacts were added to the project by NOT checking “Copy items if needed”, then there is nothing else to do before rebuilding.  The artifacts in Xcode are links to the actual artifacts that were already updated.
    2. If copies of these artifacts were made, delete them now.  Drag-n-drop the new worklight.plist and WorklightAPI artifacts onto the Xcode Project Navigator to add them.
  3. Build and run the app on a device.  The application is now hosted on an MFP server in the cloud!

Again to be clear, the Getting Started Image is not intended for production workloads.  I’ll take a look at building a custom container that can be used for production in a future post.  But the Getting Started Image did enable me to deploy my existing app to a functional MFP server running in the cloud on IBM Bluemix in about 10 or 15 minutes.  When you compare that with the thought of standing up a server, installing MFP Server, creating a runtime, deploying the adapter, deploying the application and configuring the client app, it’s pretty compelling.

My Internet of Things and MobileFirst adventure – Part 8: Build the mobile app

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Finally, we come to creating the mobile app that will tie this all together.  This is going to involve a lot of steps so I will cover the basic outline of what I did without getting into every line of code.  You can download all the source files for this project (including the Node-RED flows) from IBM Bluemix DevOps Services if you want to see the details.

But at a high level, here’s what needs to be done:

  1. Add the IBM Mobile Push service to the Bluemix application.  This will manage the push notifications with APNS (Apple Push Notification Service).
  2. Add nodes to the Bluemix Node-RED flow to initiate a push request when it receives the visitorAlert event.  This is how Bluemix will tell the mobile app that the doorbell was pressed.
  3. Create a MobileFirst project.  This will house an adapter that will retrieve the file from Cloudant as well as generate the SDK library the iOS app will use.
  4. Create a new Xcode project to house the native iOS app
  5. Add the MobileFirst SDK to the Xcode project.
  6. Add the Bluemix SDK to the project.
  7. Write code

Add the IBM Mobile Push service to the Bluemix application

  1. Open the Bluemix application.
  2. Click ENABLE APP FOR MOBILE and then YES and then CONTINUE.
  3. Click ADD A SERVICE OR API  and select Push from the Mobile category, then click USE and then RESTAGE.  I should point out that there is a Push iOS 8 service as well.  I chose not to use it and stuck with the older Push service.  Part of the reason was that Push iOS 8 depends on the newer Advanced Mobile Access and I already knew using it would require customizing my Bluemix Node-RED instance to support it.  I took the path of least resistance, but at some point, I will need to adopt AMA to keep up with the times.
  4. At the top of the application’s overview page, you will now see the app’s registration credentials.  You will need these later to tell the mobile app how to connect to Bluemix.

Configuring Apple Push is not for the faint of heart.  First of all, it requires you to have an Apple Developer license.  You can buy a personal license for $99 per year if you don’t have access to a corporate license.  Once you have a license, you must use the Apple Developer Portal to create a Device ID, an App ID and a Provisioning Profile as well as an SSL certificate that must be used by any application that wants to request APNS to send a push notification.  That process is out of scope for this blog, but you can read about it in the iOS Developer Library.

Once you have your SSL key, you will need to register it with the IBM Bluemix Push Service.

  1. Go to your Bluemix application and click the Push service.
  2. Select the Service Mode Sandbox or Production, depending on whether you have a Development or Production SSL key from Apple.

    Bluemix Push Dashboard

  3. Click EDIT under APNS.
  4. Upload your p12 SSL key and enter the password for it.

It is possible to use the IBM Push dashboard to send out test notifications.  This can be helpful in debugging the process.

Bluemix Push Notification tab

Add nodes to Bluemix Node-RED to initiate Push notifications

Node-RED on Bluemix includes nodes for IBM Push.  However, I was not able to get them to work as expected.  So I chose to go about this through brute force with HTTP nodes.  This has the advantage of demonstrating more of the details of how the Bluemix Push SDKs work, but I should really revisit this someday.

  1. Open your Bluemix Node-Red flow editor.
  2. Add a function node.  Its input should be connected to the output of the visitor Alert Event IBM IoT In node.  Name it send notification.
    1. Add the following code.  Yes, I am cheating: the picture filename is not a URL, but I got lazy and decided to use the URL element to pass it.
      var newmsg = {};
      newmsg.headers = { 
          "IBM-Application-Secret" : "<your-secret>" };
      newmsg.payload = {
          "message" : {
              "alert"    : "Someone is at the door!  " +
                     "Would you like to see who it is?",
              "url"      : msg.payload.pictureFilename
          },
          "settings" : {
              "apns"   : {
                  "sound" : "doorbell/Doorbell.mp3"
              }
          },
          "target" : {
              "platforms" : ['A']
              }
      };
        
      return newmsg;
      
    2. Replace <your-secret> with the value of your application secret code from the application’s overview page.
  3. Add an http request function node (note – this is not an http input nor http output node).  Its input should be connected to the output of the send notification function node.
    1. Set its method to POST.
    2. Set the URL to https://mobile.ng.bluemix.net:443/push/v1/apps/<your-appId>/messages where <your-appId> is the application appId found on the overview page.
    3. Set the return type to UTF-8 String.
  4. Add debug nodes if you want.

    Final Bluemix Node-RED flow

Create a MobileFirst project

I’m going to leverage the IBM MobileFirst Platform Foundation for this mobile application.  This is, of course, not necessary to use Bluemix services.  You can access Bluemix services directly from an iOS native app.  I’m not going to demonstrate them in this tutorial, but the MobileFirst Foundation provides a long list of capabilities that make developing applications easier and more secure.  What it does mean is that there will be a server-side component called an adapter that will do the actual Cloudant communication.  The iOS app will use the MobileFirst SDK to invoke routines on the adapter.

To create the MobileFirst project, you need to have either the MobileFirst Studio extension to Eclipse or the MobileFirst command line interface installed.  I’m going to be using the command line interface here.

  1. Create a new project from the command line with ‘mfp create DoorbellIOSNative‘.
  2. cd DoorbellIOSNative
  3. Start the mfp Liberty server with ‘mfp start‘.
  4. Create an adapter that will retrieve the Cloudant data with ‘mfp add adapter CloudantAdapter --type http‘.
    1. Implement the adapter logic.  The simplest way to do that is to copy the CloudantAdapter.xml and CloudantAdapter-impl.js files from this MobileFirst tutorial.
    2. Be sure to edit the domain, username, and password values in the xml file so they reflect the values from your Bluemix overview page.
  5. Add a new iOS native API using $ mfp add api APIiOS -e ios.  There are two configuration files you need to edit in the project’s /apps/APIiOS folder.  Use this MobileFirst tutorial as a guide:
    1. The worklight.plist file contains several settings that your iOS app will need to know in order to connect to the MobileFirst server which hosts the adapter.
    2. The application-descriptor.xml file defines the application name, bundleID and version for the MobileFirst server.
  6. Once you have finished configuring these files, deploy the project to the MobileFirst server with ‘mfp push‘.

Create an Xcode project

The Xcode project will contain your iOS native app source code.  Create a blank project and create an app with a UIImageView to contain the picture.  If you are not that comfortable coding, you can download my project from IBM Bluemix DevOps Services.  Developing the app UI is not really the focus here, but rather showing how you configure the project to access the MobileFirst adapter and IBM Push from Bluemix.

Add the MobileFirst SDK to the Xcode project

When you created the MobileFirst project above, two things were created that you will need to add to your Xcode project – the worklight.plist file and the WorklightAPI folder.  You also need to add several frameworks to the Xcode project to get access to the libraries you will need.  The IBM MobileFirst Platform Foundation Knowledge Center does a good job of explaining how to do this.

Add the Bluemix SDK to the project

The SDKs you need for IBM Push are included with a bundle of SDKs for Bluemix.  You can download them from the IBM Bluemix Docs.  The only tricky part here is that these libraries are Objective-C libraries.  Since I coded my iOS app in Swift, I had to create an Objective-C Bridging Header.  This is a really easy way to expose Objective-C code to Swift applications.

Write code

There are multiple places in the AppDelegate and ViewController where I needed to add custom code.  The two main tasks were configuring the app to receive and handle push notifications, and using the MobileFirst adapter to retrieve the image.

Push notification configuration

didFinishLaunchingWithOptions

This is sort of the main routine for an iOS application.  It is invoked when the app launches.  Here is where you need to have the app register for push notifications with the following code:

        // ========================================
        //  Register for push notifications
        // ========================================
        // Check to see if this is an iOS 8 device.
        let iOS8 = floor(NSFoundationVersionNumber) > floor(NSFoundationVersionNumber_iOS_7_1)
        if iOS8 {
            // Register for push in iOS 8
            let settings = UIUserNotificationSettings(forTypes: 
                UIUserNotificationType.Alert | 
                UIUserNotificationType.Badge | 
                UIUserNotificationType.Sound, categories: nil)
            UIApplication.sharedApplication().registerUserNotificationSettings(settings)
            UIApplication.sharedApplication().registerForRemoteNotifications()
        } else {
            // Register for push in iOS 7
            UIApplication.sharedApplication().registerForRemoteNotificationTypes(
                UIRemoteNotificationType.Badge | 
                UIRemoteNotificationType.Sound | 
                UIRemoteNotificationType.Alert)
        }

Notice that the way you register for push changed with iOS 8 so the code here first determines the OS level, then does what it needs to do.

didRegisterForRemoteNotificationsWithDeviceToken

This method is invoked by the framework when the app has successfully registered with APNS.  Here is where you want to initialize the Bluemix and Push SDKs.  You would replace the placeholders with the route, appId, and appSecret from your Bluemix application.

        // Initialize the connection to Bluemix services
        IBMBluemix.initializeWithApplicationId(
            "<your-app-id>",
            andApplicationSecret: "<your-app-secret>",
            andApplicationRoute: "<your-app-route>")
        
        pushService = IBMPush.initializeService()
        if (pushService != nil)  {
            var push = pushService!
            push.registerDevice("testalias",
                withConsumerId: "testconsumerId",
                withDeviceToken: self.myToken).continueWithBlock{task in
            
                if(task.error() != nil) {
                    println("IBM Push Registration Failure...")
                    println(task.error().description)
                    
                } else {
                    println("IBM Push Registration Success...")
                }
                return nil
            }
        } else {
            println("Push service is nil")
        }

didReceiveRemoteNotification

This method is invoked by the framework whenever a push notification is received.  The payload of the push notification is passed in via the userInfo object.  In a series of ‘if let’ statements, I extract the bits of information I need.  Then I determine whether the app was in the background or inactive.  If so, the user already saw the notice, read it and chose to act on it by tapping it, so I go straight to the code that gets the picture from Cloudant and loads it into the UIImageView.  If the app was in the foreground, I create a message alert and only load the image if the user taps OK.

        if let aps = userInfo["aps"] as? NSDictionary {
            if let alert = aps["alert"] as? NSDictionary {
                if let fileName = userInfo["URL"] as? NSString {
                    if let message = alert["body"] as? NSString {
                        if let sound = aps["sound"] as? NSString {
                            if (application.applicationState == UIApplicationState.Inactive ||
                                application.applicationState == UIApplicationState.Background) {
                                    println("I was asleep!")
                                    self.getPicture(fileName)
                                    
                            } else {
                                var noticeAlert = UIAlertController(
                                    title: "Doorbell",
                                    message: message as String,
                                    preferredStyle: UIAlertControllerStyle.Alert)
                                
                                noticeAlert.addAction(UIAlertAction(
                                    title: "Ok",
                                    style: .Default,
                                    handler: { (action: UIAlertAction!) in
                                        println("User wants to see who's at the door")
                                        println(fileName)
                                        self.getPicture(fileName)
                                }))
                                noticeAlert.addAction(UIAlertAction(
                                    title: "Cancel",
                                    style: .Default,
                                    handler: { (action: UIAlertAction!) in
                                        println("Handle Cancel Logic here")
                                }))
                                
                                let fileURL:NSURL = NSBundle.mainBundle()
                                    .URLForResource(
                                        "Doorbell", 
                                        withExtension: "mp3")!
                                
                                var error: NSError?
                                self.avPlayer = AVAudioPlayer(
                                    contentsOfURL: fileURL, 
                                    error: &error)
                                if avPlayer == nil {
                                    if let e = error {
                                        println(e.localizedDescription)
                                    }
                                }
                                
                                self.avPlayer?.play()
                                
                                // Display the dialog
                                self.window?.rootViewController?
                                    .presentViewController(
                                        noticeAlert, 
                                        animated: true, 
                                        completion: nil)   
                            }   
                        }
                    }
                }
            }
        }

Retrieve image from Cloudant

didFinishLaunchingWithOptions

When the app first starts up, connect to the MobileFirst Platform Server.  I’m leaving out some details here, but basically you need to call wlConnectWithDelegate.  That method takes a listener parameter, but that listener is really trivial in my case.

        let connectListener = MyConnectListener()
        WLClient.sharedInstance().wlConnectWithDelegate(connectListener)

getPicture

This routine, called when the user chooses to see the picture from a push notification, uses the MobileFirst adapter procedures to search for, and then retrieve, the image from Cloudant.

        // Search Cloudant for the document with the given fileName
        let searchRequest = WLResourceRequest(
            URL: NSURL(string: "/adapters/CloudantAdapter/search"), 
            method: WLHttpMethodGet)
        let queryValue = "['pictures','ddoc','pictures',10,true,'fileName:\"" + 
            (fileName as String) + "\"']"
        searchRequest.setQueryParameterValue(
            queryValue,
            forName: "params")
        searchRequest.sendWithCompletionHandler { 
            (response, error) -> Void in
            if(error != nil){
                println("Invocation failure. ")
                println(error.description)
            }
            else if(response != nil){
                let jsonResponse = response.responseJSON
                if let rows = jsonResponse["rows"] as AnyObject? as? NSArray {
                    if (rows.count > 0) {
                        if let row = rows[0] as AnyObject? as? Dictionary<String,AnyObject> {
                            if let doc = row["doc"] as AnyObject? as? Dictionary<String,AnyObject> {
                                if let payload = doc["payload"] as AnyObject? as? Dictionary<String,AnyObject> {
                                    var base64String : String = payload["value"] as! String
                                    
                                    // Strip off prefix since UIImage doesn't seem to want it
                                    let prefixIndex = base64String.rangeOfString("base64,")?.endIndex
                                    base64String = base64String.substringWithRange(
                                        Range<String.Index>(
                                            start: prefixIndex!, 
                                            end: base64String.endIndex))
                                    // Strip out any newline (\n) characters
                                    base64String = base64String.stringByReplacingOccurrencesOfString(
                                        "\n", 
                                        withString: "")
                                    
                                    // Convert to NSData
                                    let imageData = NSData(
                                        base64EncodedString: base64String, 
                                        options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)
                                    
                                    // Create an image from the data
                                    let image = UIImage(data: imageData!)
                                    
                                    // Stuff it into the imageView of the ViewController
                                    AppDelegate.vc!.updatePicture(image!)
                                    
                                }
                            }
                        }
                    }
                }
            }
        }

Testing

With all this in place, I build the app in Xcode and deploy it to my iPhone (remember, it must be a physical device because APNS doesn’t work with a simulator).  My app screen looks like this:

Doorbell App – Initial Screen

I walk up to my door and press the doorbell button.  The Node-RED flow on the Raspberry Pi takes a picture, stores it on the local file system and sends an MQTT message to the IBM Internet of Things Foundation broker.  The Node-RED flow running on Bluemix in the cloud receives the message and tells the Raspberry Pi to upload the picture to the Bluemix Cloudant database.  At the same time, Node-RED uses the IBM Push service, also running on Bluemix, to send a remote notification to the app on my phone.  The app receives the push notification and presents an alert to me.

Doorbell App – Push Notification

I tap Ok to see who is at the door.  The iOS app uses the IBM MobileFirst SDK to invoke adapter routines to first search for, then retrieve, the picture from the Bluemix Cloudant database.  It then pops that picture onto my mobile screen so I can see who is ringing my doorbell, even if I am hundreds of miles from home and can’t do a thing about it.

Doorbell App – Someone at the Door

Conclusion

Well, that about does it.  This has been quite an adventure.  Clearly this isn’t an app you would put into production.  I have over-engineered several areas and it is far more complicated than a doorbell needs to be.  But we did get a chance to explore several technologies including:

  • Setting up a Raspberry Pi with an external button input and a camera
  • Node-RED visual editor for controlling and interacting with the Internet of Things
  • Bluemix applications and services including Cloudant Databases, Node.js runtimes and mobile Push.
  • Registering an app with the Apple Push Notification Service.
  • IBM MobileFirst Platform Foundation for enterprise security and access to Systems of Record such as Cloudant databases
  • Using Xcode to create a native iOS application that receives push notifications from IBM Bluemix Push and retrieves data from Bluemix Cloudant datastores.

What’s next?  Well, IBM just released MobileFirst Platform Foundation version 7.1 which supports deploying the MobileFirst server in a container in Bluemix – built on Docker technology.  Maybe I will see if I can eliminate my local laptop entirely and have everything except the Raspberry Pi in the cloud!

My Internet of Things and MobileFirst adventure – Part 7a: Further elaboration…

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Well, I have been giving some more thought to the decisions I made in my last post and I think I need to reconsider how I am going to transfer the picture from the Raspberry Pi to the mobile app.  After I discovered the payload limits on the IBM IoT Foundation broker, I decided to push on with MQTT by using the Mosquitto test server.  Now, I know this could work.  I could have the mobile app use something like the Eclipse Paho project JavaScript libraries to connect and subscribe directly to the broker topic to receive the picture.  But the more I think about it, the more convoluted that sounds.  OK, yes, this whole project is a bit contrived – there is no real reason to involve all these moving parts to build a picture-taking doorbell app – but using a second MQTT broker just seems really unnatural.  If I have to switch away from IBM IoT Foundation to transfer the picture, I might as well give myself the opportunity to involve something completely new and different, right?

That’s when I thought of Cloudant.  IBM Cloudant is a NoSQL database as a service (DBaaS) that also runs in Bluemix.  There are Cloudant nodes for Node-RED that I could run directly on the Raspberry Pi (I wouldn’t even need to send the picture file to Node-RED on Bluemix).  I could use the Cloudant SDKs to read the picture from a mobile app directly or from a MobileFirst adapter.  With this new concept in mind, I reworked my architecture diagram a bit.

Revised Architecture

The red line is the new connection.  Node-RED on Bluemix actually uses a Cloudant database to store flows, etc., but I didn’t think that was all that relevant, so I left that line off the diagram.

To reverse my Mosquitto decision and move to the Cloudant approach, I need to do three things:

  1. Create a Cloudant database on Bluemix to store the picture.
  2. Modify the Raspberry Pi Node-RED flow to send the picture to Cloudant instead of Mosquitto.
  3. Clean up the Bluemix Node-RED flow.

Create a Cloudant database

I could create a whole new Cloudant database service on Bluemix to store the pictures coming from Raspberry Pi, but the Bluemix application I created from the Node-RED Starter boilerplate back in Part 6 already has a Cloudant service bound to it so I decided to just create a new database within that service.

  1. From the Bluemix Dashboard, click your Node-RED application.
  2. Note the Cloudant NoSQL DB service tile.  If you click show credentials on that tile, you will see the creds you will need later on to enable the Raspberry Pi (and other clients) to connect to the database.
  3. Click on the Cloudant NoSQL DB service tile, then click Launch.  This opens up the Cloudant database dashboard.  There is already one database running in this service – “nodered”.  This is the database that Node-RED uses to store its artifacts so let’s not muck with that.
  4. Click Add new database and name it pictures.

Later, we are going to need to be able to search the Cloudant documents by the “fileName” attribute.  Searching in Cloudant means you need to create a design document which defines the index by which you want to search.

  1. Click the + next to All Design Docs and choose New Search Index.  Set “Save to design document” to New Document and the Index name to pictures.  Add the following code as the Search index function:
    function(doc){
       if(doc.payload.fileName){
           index("fileName", 
               doc.payload.fileName, {"store": "yes"});
       }
    }
  2. Leave the other settings at their defaults and click Save & Build index.
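
For what it’s worth, this index ends up being queried through the design document’s search endpoint (something like /pictures/_design/<your-design-doc>/_search/pictures?q=fileName:"<your-file-name>"), and a Lucene-style query of the form fileName:"<your-file-name>" is roughly what the MobileFirst adapter, and later the iOS app, will build when they go looking for a specific picture.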

Modify Node-RED flow on Raspberry Pi to use Cloudant

The Raspberry Pi Node-RED doesn’t include nodes for Cloudant out of the box, but we can add node-red-node-cf-cloudant similar to the way we added node-red-contrib-gpio at the end of Part 3.

  1. Open an ssh terminal session to the Raspberry Pi.
  2. Issue the commands
    cd ~/.node-red
    npm install node-red-node-cf-cloudant
  3. Stop and restart Node-RED.  Now you will see Cloudant storage nodes in the palette.

    Cloudant nodes in the palette

  4. Remove the mqtt Picture queue node from the Send Picture flow.
  5. Drag a cloudant out storage node onto the flow.  Its input should be the output of the createJSON node.  Configure the cloudant out node.
    1. Choose External service
    2. Configure a new Server.
      1. Host, Username and Password are the connection credentials from the Bluemix service I said you would need.
      2. Name is just a name you give to the server configuration so you can reference it again if needed.
    3. Database should be set to pictures
    4. Leave Operation as insert
    5. Name the node insertPicture.
  6. Deploy the flow.  You will get a warning that your test.mosquitto.org configuration node is unused.  Node-RED keeps connection configurations like this around just in case you want to reuse them.  Click Confirm deploy for now.  If you want to remove these configurations, you can select configuration nodes from the hamburger menu on the right.
  7. Test it by pressing the doorbell button.  The Process Doorbell flow should send an IBM IoT visitorAlert event message to Node-RED on Bluemix which should, in turn, send an IBM IoT sendPicture command message back to Node-RED on Raspberry Pi.  This will eventually create a Base64 encoded image string, package it up with the filename, length and status into a payload that gets sent to Cloudant in Bluemix by the insertPicture node.  See if it worked by going to the Bluemix Cloudant dashboard.  You should see one Document in the pictures database that looks something like this.

Cloudant Document
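
As a point of reference, here is a rough sketch of what a createJSON-style function node could look like.  My actual node isn’t shown in this post, so the field names below (fileName, length, status and value) are assumptions based on the description above and the Cloudant document screenshot.

    // Hypothetical createJSON function node: wrap the Base64 image string
    // (assumed to arrive in msg.payload) and its metadata into a single
    // payload object for the insertPicture (cloudant out) node.
    var newmsg = {};
    newmsg.payload = {
        "fileName" : msg.filename,        // file name set earlier in the flow (assumption)
        "length"   : msg.payload.length,  // length of the Base64 string
        "status"   : "new",               // simple status flag (assumption)
        "value"    : msg.payload          // the Base64-encoded image itself
    };
    return newmsg;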

Clean up the Bluemix Node-RED flow

Now let’s remove the no-longer-needed MQTT node – the one that used to receive the picture – from the Bluemix Node-RED flow.

  1. Open Node-RED on Bluemix.
  2. Select and delete the MQTT Picture queue node and the debug node attached to it.
  3. Deploy the flow.

Ok, we are finally ready to create a mobile app and wire it up to Bluemix Push and Cloudant to complete the circle.  That’s the next post.

Part 6: Enabling Push Notifications

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

When we left off in the last post, I had a Raspberry Pi that is capable of taking a picture using its camera and storing that picture in a file.  It can also sense that a physical pushbutton has been pressed – a doorbell.  I also have my Bluemix services configured to manage Push notification requests and to broker IoT messages.  Let’s start wiring things together by configuring Push notifications.

Build an iOS app that can receive push notifications

Note: This app evolved and changed a lot as I kept changing my mind on how I wanted to do things.  Again, you may not see why I decided to implement things the way I did until later.  Just play along for now.

I’m writing a Native iOS app in Swift.  I’ll share the entire app in GitHub at the end of this series, but for now I will just show you code snippets of the relevant parts.

To receive remote push notifications, mobile apps must register for them.  There are really two parts to this: registering with the Apple Push Notification Service so the app can receive notifications, and registering with the IBM Push Notifications service so it can send notification requests on behalf of the Raspberry Pi app.

Register for remote notifications with APNS

The following code defines two notification actions and registers them with APNS to enable the app to display alert boxes with these buttons, play sounds and update badge counts in response to remote notifications:


private func registerUserNotificationSettings() {
    // Actionable Notifications
    let videoAction = UIMutableUserNotificationAction()
    videoAction.title = NSLocalizedString("Video", comment: "Launch Video")
    videoAction.identifier = "video"
    videoAction.activationMode = UIUserNotificationActivationMode.Foreground
    videoAction.authenticationRequired = false

    let pictureAction = UIMutableUserNotificationAction()
    pictureAction.title = NSLocalizedString("Picture", comment: "Launch Picture")
    pictureAction.identifier = "picture"
    pictureAction.activationMode = UIUserNotificationActivationMode.Foreground
    pictureAction.authenticationRequired = false

    let actionCategory = UIMutableUserNotificationCategory()
    actionCategory.setActions([videoAction, pictureAction],
        forContext: UIUserNotificationActionContext.Default)
    actionCategory.identifier = "doorbell"

    let categories = NSSet(array: [actionCategory])
    let settings = UIUserNotificationSettings(forTypes: [.Alert, .Badge, .Sound],
        categories: categories as? Set<UIUserNotificationCategory>)
    UIApplication.sharedApplication().registerUserNotificationSettings(settings)
    UIApplication.sharedApplication().registerForRemoteNotifications()
}

You call this procedure from the AppDelegate’s didFinishLaunchingWithOptions method.

Register the device token with IBM Push Notifications service

You know registration with APNS was successful when the system calls your AppDelegate’s didRegisterForRemoteNotificationsWithDeviceToken method, so here’s where you register with IBM Push Notification service, passing along the device token.


// Register the APNS device token with the IBM Push Notifications service.
// 'token' is the device token handed to didRegisterForRemoteNotificationsWithDeviceToken.
pushService?.registerDeviceToken(token, completionHandler: { (response, error) -> Void in
    if error != nil {
        self.logger.logErrorWithMessages("IBM Push Registration Failure…")
        return
    }
})

Again, realize that this is not complete code, but rather just the important parts.

Invoke a Push notification from the Raspberry Pi

I’m going to change up the Node-RED flows that I created in Part 4.  The flow I had created caused the Pi to capture a picture as soon as the doorbell was pressed.  I am going to have Node-RED only send a push notification to the app when the doorbell is pressed.  The picture won’t actually be taken until the user requests it in response to receiving the Push notification on the iPhone app.  No, I don’t think this is the best design either, but again, I am using this as an opportunity to learn as much as I can about IoT, so this design will enable me to eventually send MQTT commands from the app to the Pi.

There is a Node-RED node package that sends push notification requests to the IBM Push Notifications service.  But what I found is that this node only accepts a simple string to use as the message alert string.  The IBM Push Notifications service can optionally accept a JSON object that defines a number of other items.  I would like to provide some of these other items in my Push message so I am going old school and using an HTTP node instead.  This additionally illustrates how the IBM Push Notifications service can be invoked with a simple HTTP POST.

Raspberry Pi Node-RED flow

I modified my Node-RED flow to the following:

Node-RED flow for processing the doorbell

The “send notification” and “push” nodes are new and require a little explanation.

HTTP Request node

I did not customize this node at all, with the exception of naming it.  The address, headers and content of the message will be provided in the incoming message from the function node.

Send Notification function node


// We only want to send a notification when
// the button is pressed, not when it is released.
if (msg.payload !== 0) {
    var newmsg = {};
    newmsg.method = "POST";
    newmsg.url = "https://mobile.ng.bluemix.net/imfpush/v1/apps/your-appID-here/messages";
    newmsg.headers = { "appSecret" : "your-app-secret-here" };
    newmsg.payload = {
        "message" : {
            "alert" : "Someone is at the door! " +
                "Would you like to see who it is?"
        },
        "settings" : {
            "apns" : {
                "sound" : "Doorbell.caf",
                "category" : "doorbell"
            }
        },
        "target" : {
            "platforms" : ['A']
        }
    };
    return newmsg;
}

  • Line #3:  The pushbutton will send a message (“1”) when pressed and a message (“0”) when released.  Also, in order to simulate a button press for testing, I added a timestamp Inject node as an input to the node as well.  I only want the node to be triggered when the payload is NOT 0 – meaning either the pushbutton was pressed (not released) or the message is a timestamp.
  • Line #5:  This request will use the HTTP POST verb
  • Line #6:  The URL address of the request.  You will need to replace “your-appID-here” with the AppGUID from your Bluemix application
  • Line #7:  The appSecret HTTP header value needs to be the appSecret from your Bluemix IBM Push Notifications service.
  • Line #8 – #22:  This defines the payload message that will be sent to the IBM Push Notifications service.  For a full explanation, click Model to the right of body at https://mobile.ng.bluemix.net/imfpushrestapidocs/#!/messages/post_apps_applicationId_messages.

Testing

Once deployed, clicking the timestamp inject node should cause the function node to create a message and the push node to send it.  You can confirm this by checking the output of the msg debug node in the debug view.  You should see a statusCode of 202 and a header x-backside-transport value of “OK OK”.

Assuming you have added the code above for registering for remote notifications and registering the device token with IMFPush to even a simple iOS app, you should also see the notification appear on your device.

Diagnosing problems with Push notifications is difficult.  You either get the notification on the mobile device or you don’t.  There are no diagnostic messages returned from APNS or log files created.  The most likely cause of a push not finding its way to your phone is that the SSL certificate wasn’t created or installed properly.

Next

The next thing is to set up the basics required for the app to request a picture and the Pi to respond by sending it to the app via IoT Foundation.

That will be the next blog post in this series.

Part 5: Set up the Bluemix application environment

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Today I am going to configure my Bluemix environment that will talk to my Raspberry Pi.  I already have a Bluemix account, but you can go to https://ibm.biz/IBM-Bluemix and get your own free account.

Create a Bluemix application

Bluemix provides a number of boilerplates.  Boilerplates are packages of services already bundled together.  This is usually the easiest way to get started.

  1. Click Catalog and then MobileFirst Services Starter to create a new application based on the boilerplate.
  2. Enter a name for your app.  It has to be unique, so consider including your initials or something in the name.  Click Create.
  3. Bluemix will start cranking through the creation process.  First your application will be created, then staged.  Once it says your app is running, the creation process is done.

Add the IBM Internet of Things Foundation service

IBM Bluemix provides a service called Internet of Things Foundation.  This is sort of a registry service and a message broker all in one.  You can register your device in the Internet of Things, then communicate with it through IoT nodes in Node-RED.

  1. From your Bluemix App’s Overview page, click Add a service or API.  Search or scroll to the bottom and select Internet of Things Foundation.
  2. Your app will need to be restaged to add the service.  Wait until App Health indicates your application is running again.

Add the Raspberry Pi to the Internet of Things Foundation

  1. Once the app is running again, click the Internet of Things Foundation tile.
  2. Click Launch Dashboard.
  3. From the Overview tab, click Add a device.
  4. Create a device type.  This can be anything you want, say myPi.
  5. You can optionally choose the attributes you want to assign for each device of this type.  You can skip this for now if you want.  If you choose to add attributes, you will be asked to supply defaults on the next screen.  You can also add additional metadata in JSON format.
  6. After entering all that information about the device type, you will return to the Add Device wizard.  The only bit of information here that is absolutely required is the Device ID.  That is the device’s MAC address.  To find it, go back to your Raspberry Pi command line and type ifconfig eth0.  The output will look something like this:

    ifconfig output

  7. Type the string of characters following the HWaddr (leaving out the colons) into the Bluemix IoT dialog.  In my example, it would look like this:

    IoT MAC Address entry

  8. Click through the rest of the screens until you get to the page displaying Your Device Credentials.  You will be shown a box with the Authentication Token.  Copy this down immediately!  You will not be given a second chance to see it!
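
To make those registration details a little more concrete: once you have the organization ID, device type, device ID and authentication token, a device connects to the broker over MQTT using a well-known client ID and topic convention.  Here is a minimal Node.js sketch using the npm mqtt package, outside of Node-RED (the IoT nodes in Node-RED handle all of this for you); the org ID, device ID, token and event payload below are placeholders, and the host and topic conventions are as I understand them from the IoT Foundation docs.

    // Minimal device-side MQTT sketch for IoT Foundation (placeholders throughout).
    var mqtt = require('mqtt');

    var org        = 'yourOrgId';         // shown in your IoT Foundation dashboard
    var deviceType = 'myPi';              // the device type created above
    var deviceId   = 'b827ebxxxxxx';      // the MAC address you entered, no colons
    var authToken  = 'your-auth-token';   // the token you copied down

    var client = mqtt.connect('mqtt://' + org + '.messaging.internetofthings.ibmcloud.com', {
        clientId : 'd:' + org + ':' + deviceType + ':' + deviceId,
        username : 'use-token-auth',
        password : authToken
    });

    client.on('connect', function () {
        // Publish a JSON event; the Bluemix flow will see it as a "visitorAlert" event.
        client.publish('iot-2/evt/visitorAlert/fmt/json',
            JSON.stringify({ d : { button : 1 } }));
    });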

Configuring the IBM Push Notifications service

Next, I’ll configure the IBM Push Notifications service.  This is one of those cases where in reality, I came back and did this later, but since we are in Bluemix now, let’s set it up here.

The IBM Push Notifications service is, in a sense, a notification broker.  An application that wants to send push notifications to devices (in my case, the Node-RED flow acting on behalf of the Raspberry Pi) sends message requests to the IBM Push Notifications service via HTTP REST calls.  The IBM Push Notifications service will, in turn, dispatch the request to the notification service for the appropriate platform.  In my case, I am going to be writing the app on iOS, so the IBM Push Notifications service will invoke APNS (the Apple Push Notification Service), which will send a remote notification to my device.  In order to receive the notification, the device app software will need to register with the IBM Push Notifications service.

Configuring Apple Push is not for the faint of heart.  First of all, it requires you to have an Apple Developer license.  You can buy a personal license for $99 per year if you don’t have access to a corporate license.  Once you have a license, you must use the Apple Developer Portal to create a Device ID, an App ID and a Provisioning Profile as well as an SSL certificate that must be used by any application that wants to request APNS to send a push notification.  That process is out of scope for this blog, but you can read about it in the iOS Developer Library.

Once you have your SSL key, you will need to register it with the IBM Bluemix Push Service.

  1. Go to your Bluemix application Overview and click the IBM Push Notifications service.
  2. Click Setup Push.

    Bluemix Push Dashboard

  3. Click Choose file under Apple Push Certificate.  (Note that the process would be similar for Google Cloud Messaging for Android devices.)
  4. Upload your p12 SSL key and enter the password for it.

It is possible to use the IBM Push Notifications dashboard to send out test notifications.  This can be helpful in debugging the process.

That takes care of configuring the Bluemix services in the cloud.  You have two primary services:

Internet of Things Foundation

  • Acts as a broker for MQTT messages between IoT Applications and Devices
  • Maintains registration information for all your Devices

IBM Push Notifications

  • Acts as a broker for remote notifications between requesting applications and the vendor cloud messaging services.
  • Provides a manual Push testing interface

That takes care of the system setup tasks.  In the next post, I will start implementing my solution and configure my Raspberry Pi Node-RED flows to send Push Notifications.  In a later post, I will enable my mobile application to request a picture from the Raspberry Pi and the Raspberry Pi to return the picture using the IoT Foundation.

Part 4: Adding in the camera

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Now I have the Raspberry Pi running Node-RED and can control a pushbutton and LED.  Cool.  And then, the mail carrier came!  My camera module arrived!

Camera module installation is pretty easy.  Just follow the Raspberry Pi Documentation.  There is an easy command-line utility called raspistill that I used to test out the camera.
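
(If you just want a quick sanity check before wiring anything else up, a command along the lines of raspistill -o test.jpg should capture a still image into test.jpg; the exact options are covered in that documentation.)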

Now, there is no out-of-the-box node to control the camera module from Node-RED, and I never found one by Googling around the internet either.  But that’s OK – this gave me an opportunity to see how extensible Node-RED is.  I would have to integrate the Raspicam Node.js library into function nodes.

Making Raspicam accessible in Node-RED

First, I had to install the raspicam module into Node.js.  The npm package manager makes that easy enough, but again, make sure you are in the right directory.

cd ~/.node-red
npm install raspicam

Now, that makes raspicam accessible in Node.js, but there is an additional step required to make it accessible within Node-RED.  See the Global Context section in the Node-RED Documentation.  In ~/.node-red/settings.js, I added

functionGlobalContext: {
        RaspiCam:require('raspicam')
},

Bounce (restart) Node-RED and you should be able to access raspicam inside Node-RED function nodes like this:

 var camera = new context.global.RaspiCam( opts );

Adding a function node

Now I’m ready to add some code to control the camera.  This is done with a function node.

  1. Drag a function node onto the canvas and connect it like this
    Function Node added to test flow

    A couple of things to note here.  First, you can have more than one node wired to a node’s input.  In this case, both the Pin 12 node and the function node will send information to the msg.payload debug node.  We already saw this in the previous post because the Pin 12 node and timestamp injection node send output to the trigger node.  Secondly, you can have more than one node take input from a node’s output.  In this case, the trigger node will pulse the LED, but it will also send the trigger to the function node, which we will use to take a picture.

  2. Double-click the function node.  Name it Take Picture.  Paste the following code into the Function area:
    // We only want to take pictures when 
    // the button is pressed, not when it is released. 
    if (msg.payload == 1) { 
        var encoding = "png"; 
        var currTime = new Date().getTime();
    
        // Use the current timestamp to ensure
        // the picture filename is unique.
        var pictureFilename = "/home/pi/pictures/" + currTime + "." + encoding;
        var opts = {
            mode: "photo",
            encoding: encoding,
            quality: 10,
            width: 250,
            height: 250,
            output: pictureFilename,
            timeout: 1};
    
        // Use the global RaspiCam to create a camera object.
        var camera = new context.global.RaspiCam( opts ); 
    
        // Take a picture
        var process_id = camera.start( opts ); 
    
        // Send the file name to the next node as a payload.
        return {payload: JSON.stringify(
            {pictureFilename : pictureFilename}) };
    }
  3. Deploy the flow.  You should see two messages in the debug view.  The first is the message from the Pin 12 node.  The second is the message created by the “Take Picture” node.  That node has told the camera to take a picture (if you were paying attention, you would have seen the red light on the camera module flash) and has sent a message with the filename to the debug node.
    Deploy messages

    There is a problem here.  The Pin 12 node is configured with “Read initial state of pin on deploy/restart?” checked.  This is causing my flow to trigger and a picture to be taken when I don’t want it to.  I’ll fix that before verifying the camera worked.

  4. I’ll double-click the Pin 12 node and deselect “Read initial state of pin on deploy/restart?” and Deploy again.  This time I get no new debug messages.
  5. Press the pushbutton on the breadboard.  The camera light will blink as it takes a picture and the LED connected to Pin 22 will pulse for 2 seconds.
  6. I’ll open up a VNC session, find the picture file in /home/pi/pictures with the name in the debug message and open it.  The picture isn’t all that exciting.  My Pi camera is sitting on my desk, pointing toward my laptop monitor.  But it proves that things are working.

    Viewing picture on Pi VNC

Ok, so now I have a Raspberry Pi and a Node-RED flow triggered by an external button that will take a picture using the Pi camera module and store the picture in a file on the device.  I’m making progress but so far I only really have a “thing”, not an “internet of things”.  In the next post, I’ll set up my Node-RED environment on Bluemix so I have something to communicate with.