Part 9: Apple Watch

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Ok, I know I’m getting really carried away on this thing now, but I got an Apple Watch and thought it might be kinda cool to see if I could create a Watch app for my doorbell app.

I’m going to focus on the unique challenges I ran into for this app and not get too deep into how you build a Watch app.  There are resources out there (although not as many as I had hoped) to teach you how.  One particularly good resource I purchased is a tutorial book from Ray Wenderlich called “watchOS 2 by Tutorials”.  This is definitely worth the price of admission if you are serious about learning how to build for the Watch.  One caution I will leave you with – make sure whatever resources you are using are for watchOS 2.  There were significant changes between version 1 and version 2 and there is still quite a bit of watchOS 1 info out there.

There are three features of the Doorbell app I would consider making available on the Watch.  Well, actually, there are really only three features in the whole Doorbell app:

  1. Push notifications when someone rings the doorbell.
  2. Displaying a picture from the Raspberry Pi.
  3. Displaying video from the Raspberry Pi.

It didn’t take me long to see that #3 is a long shot.  There are very few video viewing apps for the Watch and those that do exist provide the video content as files transferred to the Watch.  I considered rearchitecting things to capture short video clips to files that could be sent to the Watch, but I pretty quickly wrote that off.  I’ll focus on Push notifications and pictures.

A Watch app is not a separate app at all.  It is really an extension of the iPhone app.  A Watch app cannot exist without its “host” iPhone app.  You get started by adding a Watch target to your existing iPhone app.

Display a picture

My plan was to put a button on a Watch interface that would request a picture from the iPhone app and would then display it in a WKInterfaceGroup or something.  Creating the UI was pretty easy – there are a lot fewer options than on iOS.

The WatchConnectivity framework gives you various ways to communicate between the iPhone and the Apple Watch.  Interfaces can be immediate (sendMessage) or background (transferFile, updateApplicationContext).  My first instinct was that I wanted the picture immediately, so I should use sendMessage.  This actually did work, but sendMessage has a size limit on the payload.  I never did find an authoritative source that defined the specific value, but unless I really reduced the resolution on my picture, I exceeded it.
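
A minimal sketch of that immediate path, assuming an already-activated WCSession (the dictionary keys here are just placeholders):

import WatchConnectivity

// Watch side: ask the phone for a picture right away and expect the JPEG bytes back
// in the reply.  This only works while the phone is reachable, and the reply payload
// is what ran into the undocumented size limit.
func requestPictureImmediately() {
    let session = WCSession.default
    guard session.isReachable else { return }
    session.sendMessage(["command": "takePicture"], replyHandler: { reply in
        if let data = reply["picture"] as? Data {
            // decode 'data' into a UIImage and hand it to the WKInterfaceGroup here
            print("received \(data.count) bytes")
        }
    }, errorHandler: { error in
        print("sendMessage failed: \(error)")
    })
}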

So, I took the approach to have the Watch request the picture using updateApplicationContext.  The iOS app would then send the takePicture command to the Raspberry Pi, receive the picture in packets, reassemble it, then send it to the Watch using transferFile.  This actually worked brilliantly – as long as the iOS app was in the foreground.
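
Here is a rough sketch of the shape of that exchange, with error handling and the WCSessionDelegate boilerplate trimmed away (the dictionary keys, file name, and the fetchPictureFromPi helper are placeholders, not the actual project code):

import WatchConnectivity

// Watch side: queue the request.  Application context updates are coalesced, and an
// identical dictionary may not be re-delivered, so a timestamp keeps each request unique.
func requestPicture() {
    try? WCSession.default.updateApplicationContext(
        ["command": "takePicture", "requestedAt": Date().timeIntervalSince1970])
}

// iPhone side (in the WCSessionDelegate): react to the request, fetch the picture from
// the Pi over MQTT, write it to a temporary file, and send the file to the Watch.
func session(_ session: WCSession, didReceiveApplicationContext applicationContext: [String: Any]) {
    guard applicationContext["command"] as? String == "takePicture" else { return }
    fetchPictureFromPi { imageData in   // hypothetical helper wrapping the MQTT round trip
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("doorbell.jpg")
        try? imageData.write(to: url)
        session.transferFile(url, metadata: nil)
    }
}

// Watch side (in the WCSessionDelegate): the transferred file lands here.  The fileURL
// is only guaranteed to be valid during this call, so read it immediately.
func session(_ session: WCSession, didReceive file: WCSessionFile) {
    guard let data = try? Data(contentsOf: file.fileURL) else { return }
    // decode 'data' into a UIImage and push it into the WKInterfaceGroup / WKInterfaceImage
    print("received file of \(data.count) bytes")
}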

This is where I ran into another constraint from Apple.  The iOS app is VERY limited in what it can do when in the background.  Namely, it can only use the network for very limited reasons, and publishing MQTT commands doesn’t seem to be one of them.  There are other ways to do this.  You can have an iOS app do some network operations in the background if you can justify classifying it as a VoIP or News app – then it can do some HTTP operations while backgrounded.  I’m sure that’s how Watch apps such as weather apps get regular updates from the host app.

Nonetheless, I did get something to work as long as the iOS app is active.  I can get a picture from my Raspberry Pi to the Watch.

Watch app screenshots: the Refresh, Requesting, and Returned states.

Push notifications on the Apple Watch

I learned that there is an involved set of rules that determines when a Push notification shows up on the Watch and when it shows up on the iPhone, even if you have built in the ability to handle the Push on the Watch.  You don’t get to choose – iOS makes the decision.  Basically, the notification will always go to the iPhone if it is not locked and asleep.  Even if it is, the Watch must be on your wrist before iOS will decide to let the Watch present the notification.  Makes sense I guess.  Why would you want a notification to show up on your Watch if you have the iPhone in your hand and you are looking at it?

The Watch app has an ExtensionDelegate class which is analogous to the AppDelegate of the iPhone app.  You override its handleActionWithIdentifier method to handle the custom action buttons (Picture and Video) that I registered in the iPhone app’s AppDelegate.  Now when the doorbell is pressed, Node-RED on the Raspberry Pi will send a request to the IBM Push Notifications service, which will in turn have APNS send a push.  If the iPhone is asleep in my pocket, I will get a push notification on the Watch with the Picture and Video buttons.
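
A sketch of that hook in the ExtensionDelegate (the action identifier strings are placeholders – they just have to match whatever was registered with the notification category on the iPhone side):

import WatchKit

class ExtensionDelegate: NSObject, WKExtensionDelegate {

    // watchOS 2-era entry point: called when the user taps one of the custom action
    // buttons on a remote (push) notification shown on the Watch.
    func handleAction(withIdentifier identifier: String?,
                      forRemoteNotification remoteNotification: [AnyHashable: Any]) {
        switch identifier {
        case "PICTURE_ACTION"?:
            // kick off the picture request / Handoff described below
            break
        case "VIDEO_ACTION"?:
            break
        default:
            break
        }
    }
}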


The problem is, what do I do now that I have received the push on my Watch?  Since I already found that I can’t ask the backgrounded iPhone app to go get me a picture and then update it on the Watch, I am kinda stuck.  I did end up implementing Handoff, so once I tap either Video or Picture on the Watch, I can slide up on the lock screen icon on my iPhone and have it take me directly to the corresponding view and refresh it.
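
A minimal sketch of that Handoff wiring (the activity type string and userInfo keys are placeholders; the type also has to be declared under NSUserActivityTypes in the iPhone app’s Info.plist, and the restorationHandler signature has shifted across iOS versions):

import WatchKit
import UIKit

// Watch side: in the interface controller handling the notification action, advertise
// a user activity that the iPhone can continue from the lock screen.
class DoorbellInterfaceController: WKInterfaceController {
    func handOffPictureView() {
        updateUserActivity("com.example.doorbell.showPicture",
                           userInfo: ["view": "picture"],
                           webpageURL: nil)
    }
}

// iPhone side: in the AppDelegate, continue the activity by jumping to the matching
// tab and refreshing it.
func application(_ application: UIApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    guard userActivity.activityType == "com.example.doorbell.showPicture" else { return false }
    // select the Picture (or Video) tab on the tab bar controller and trigger a refresh
    return true
}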


So as it turns out, the Apple Watch probably isn’t the ideal platform for a visual doorbell app.  But it was an interesting experiment.  At this point, I think I have milked this adventure for all it is worth.

Conclusion

Well, it has been quite an adventure.  I can’t say that I have a terribly useful app at the end of all this, but it did give me a platform to learn a lot of stuff.

If you are interested, the full code for the project can be found on GitHub.

 

Part 8: Watching video from the Pi on the Phone

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

Let me say right off that there are dozens of alternatives available if you want to stream video from a Raspberry Pi to another device.  They all seem to have their pros and cons.  Some are really inefficient.  Others require a lot of processing and an additional app server.  Still others are very proprietary and only work on a very limited set of devices or platforms.  So, feel free to try your own approach.

I decided to go with the VLC media player.  I have known VLC as a good media player for a long time, but I had no idea it could do so many other things – in particular, act as a video streamer.

So here’s the basic plan:

  1. Configure the Raspicam to record video to a fifo file.
  2. Configure VLC on the Raspberry Pi to stream that fifo content to a URL.
  3. Connect a VLC player object embedded in the iPhone app to the URL stream.

Install VLC on the Raspberry Pi

You can install the Debian package on the Raspberry Pi:

sudo apt-get install vlc

That was easy.

Add the VLC SDK to the iOS app project

There’s a Pod for that.  In fact there are multiple.  I went with MobileVLCKit.

Create the Raspberry Pi Node-RED flows

Let’s start on the Raspberry Pi side and deal with sending the command from the app later.  Let me warn you, this gets a little crazy.  There are several steps here that I probably could have done more simply in a shell script.  But I tried to stay in Node-RED as much as I could, which ended up complicating some things.  Here we go.

Start Stream Flow

Create the following flow:

Start Stream Node-RED flow

I warned you.  Let’s go through the nodes:

  • startStream – Receives the IoT Foundation startStream command from the application.
  • mkfifo – Creates a fifo file on the filesystem.  This is an ‘exec’ node, which just runs the given OS command.  It returns stdout as the payload of the message, but in this case, I don’t really care about the contents of stdout.
  • start raspivid – Invokes the shell script “/home/pi/bin/startraspivid.sh”.  This shell script starts up raspivid recording 640×640 video at 25 frames per second.  This seemed to produce decent looking video yet kept the bandwidth requirements down.  These numbers could definitely be tweaked if needed.  raspivid streams to the fifo file named vidstream.  Yes, the rest of the command looks odd.  The issue was that I needed the node to launch raspivid and let it continue to run in another process yet return execution to the next node.  I tried a number of approaches and the only one that seemed to work was to background it (‘&’), but I found I also had to redirect stdout and stderr (‘> raspividout.txt 2>&1’) or execution wouldn’t return to the node.

  • start vlc – This is similar to the raspivid node.  It invokes a bash shell script that starts up VLC (‘cvlc’).  VLC will take the vidstream fifo file as input and produce a video stream on port 8554.  I had to do the same dance with backgrounding and redirecting output as I did with the raspivid script.
    VLC has a seemingly infinite number of command line switches and arguments.  I won’t claim to have figured all this out myself.  I relied heavily on the VLC documentation and a couple blogs on the Raspberry Pi forum.

  • get IP Address – This function node figures out the IP address of the Raspberry Pi.  I need this because the iPhone app will need to know where to connect the VLC player to view the stream.  Now this brings up a significant limitation to the VLC approach – the Raspberry Pi must be visible to the mobile device on the network.  Unless you are willing to go through all the steps necessary to get your Raspberry Pi a public IP address and deal with all the security issues that will arise, that’s a pretty big limitation.  I decided I can live with the limitation that video only works while my phone is on my local Wifi network with my Raspberry Pi.

  • streamStarted – An ibmiot output node that publishes the device event streamStarted with a payload that includes the IP address of the Raspberry Pi.

Stop Stream Flow

The Stop Stream flow is invoked by receiving a stopStream IoT Foundation command from the mobile app.  It simply kills the Raspbian processes for vlc and raspivid and then publishes a device event acknowledging that it did.

Stop Stream Node-RED Flow

  • stopStream – ibmiot input node that receives the command stopStream.
  • Stop Stream – An exec node that runs the script stopstream.sh.

  • format Message – My mobile app doesn’t actually do anything with this, but I created this node to parse out the process identifiers (PIDs) of the vlc and raspivid processes that were killed and include them in the payload of the streamStopped event.

  • streamStopped – An ibmiot output node that publishes the device event streamStopped with a payload that includes the PIDs of the killed processes.

Add functionality to the iOS app

Again, I won’t go into all the details here, but I want to hit the high points.  What needs to be done in the app is:

  1. Add a UIView object to a view to act as the ‘drawable’ for the mediaPlayer.
  2. Add a VLCMediaPlayer object.
  3. Send the startStream command through IoT Foundation.
  4. Receive the IP address and have the mediaPlayer attempt to connect to it.

Add a UIView object

I decided to create a tabbed view controller, making the first tab the Picture view controller and the second the Video view controller. The Video view controller just has a UIView on the storyboard with an IBOutlet in the view controller code.

Add a VLCMediaPlayer object

With the MobileVLCKit pod added to my Xcode workspace, this is as simple as declaring a VLCMediaPlayer property on the view controller.

In the view controller’s viewDidLoad method, I configure the mediaPlayer.
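
Something like this, where the class and outlet names are placeholders for whatever the storyboard connections are actually called:

import UIKit
import MobileVLCKit

class VideoViewController: UIViewController {

    @IBOutlet weak var videoView: UIView!   // the 'drawable' surface from the storyboard
    let mediaPlayer = VLCMediaPlayer()       // one player for the life of the controller

    override func viewDidLoad() {
        super.viewDidLoad()
        // Tell VLC which view to render the incoming stream into.
        mediaPlayer.drawable = videoView
    }
}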

Send the startStream command

In the view controller’s viewWillAppear method, I send the startStream command.  This is really the same as sending the takePicture command. I do provide a completion handler, but this time, the handler will only be invoked once – when the streamStarted event is published by the Raspberry Pi.

Receive the IP address and connect the MediaPlayer

The callback gets invoked when the streamStarted event is published.  The payload of the streamStarted event is the IP address of the Raspberry Pi, so all that is left is to set the mediaPlayer’s media value and tell it to play.
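
In sketch form, continuing in the same view controller, it is something like the following.  Note that the URL format is an assumption – it depends entirely on how cvlc was told to publish the stream (an RTSP stream on port 8554 with no path is assumed here):

// Invoked from the streamStarted completion handler with the IP address payload.
func connectPlayer(to ipAddress: String) {
    guard let url = URL(string: "rtsp://\(ipAddress):8554/") else { return }
    mediaPlayer.media = VLCMedia(url: url)
    mediaPlayer.play()
}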

Remember, though, that as we discussed earlier, the mediaPlayer will not be able to connect to the stream if the Pi is not visible to the mobile device. If that is the case, the mediaPlayer object throws up an alert. I could have handled that in my code with a delegate, but I decided to just let the mediaPlayer deliver the bad news.


Summary

There are a lot of ways to skin this cat.  I chose to use VLC because it was relatively easy (compared to other methods I considered), but it does have the significant limitation that my Raspberry Pi has to be visible to my iPhone on the network.  But nonetheless, while I am on the same Wifi network, it is pretty cool that when I hear the doorbell ring, I don’t even need to get up out of my recliner to see if it is a siding salesman.  The world of WALL-E is fast approaching.

So next I went a little crazy.  I got a new Apple Watch and thought I would see if I could build it into this environment.  That’s the next post.

Part 7: Requesting and Receiving a Picture

This is part of a series of posts related to My Internet of Things and MobileFirst adventure.  An index to all posts can be found at the end of the first post.

The next step is to see if we can make the iOS app request a picture from the Raspberry Pi and have it respond with one.

I started by creating a simple iOS app that had a button to start the request process and a UIImageView to display the picture.  The UI eventually evolved a bit, but this gave me a place to start and kept things simple.

MQTT Mobile Client software

There is no iOS SDK specific to the IBM IoT Foundation.  There are, however, several MQTT pods out on cocoapods.org that could do the job.  MQTT is the protocol underlying the messaging in the IoT Foundation.  I chose to use MQTTClient.  I followed the simple instructions to install using cocoapods.  MQTTClient provides interfaces at various levels.  The simplest way to use MQTTClient is through MQTTSessionManager.

IBM IoT Foundation conventions

The IoT Foundation does layer on some extensions that are supported by conventions you must observe when using MQTT. The full IoT Foundation documentation is very helpful.

First of all, the IoT Foundation considers there to be two types of “things” in the Internet of Things: devices and applications.

Devices:

  • A device can be anything that has a connection to the internet and has data it wants to get into the cloud.
  • A device is not able to directly interact with other devices.
  • Devices are able to accept commands from applications.
  • Devices uniquely identify themselves to the IoT Foundation with an authentication token that will only be accepted for that device.
  • Devices must be registered before they can connect to the IoT Foundation.

Applications:

  • An application is anything that has a connection to the internet and wants to interact with data from devices and/or control the behaviour of those devices in some manner.
  • Applications identify themselves to the IoT Foundation with an API key and a unique application ID.
  • Applications do not need to be registered before they can connect to the IoT Foundation, however they must present a valid API key that has previously been registered.

For some reason, it took me a while to wrap my head around this.  I wanted to consider my iPhone a “device”, but in fact, it would be an “application”. For one thing, it is going to be sending commands to get information from the Pi.  Secondly, note that you don’t register applications in advance – they just provide a unique key.  This would be important if I were to go into production with my doorbell.  I can’t register every possible mobile phone in advance.

Commands:

  • Commands are the mechanism by which applications can communicate with devices. Only applications can send commands, which must be issued to specific devices.

Events:

  • Events are the mechanism by which devices publish data to the Internet of Things Foundation.

So my iOS app will be an application that will send commands and subscribe to events that will be published by my Raspberry Pi which is a device.  Got it.

Here are some other MQTT concepts and their manifestation in IoT Foundation.

Connection parameters

  • MQTT host:  The host of the IoT broker is org_id.messaging.internetofthings.ibmcloud.com, where org_id is the org assigned to your IoT Foundation service when it was created.  You can find it from the Bluemix dashboard for your application.
  • MQTT client identifier:  This is an identifier the application must provide when it connects to the IoTF.  It must be unique.  The convention is a:org_id:app_id, where org_id is the same as above, and app_id is something unique for each app instance.  Since there will only be one app instance at a time for my simple app, I just used the Bluemix AppID for this.
  • MQTT username: Use the IoT Foundation API Key generated when you created your IoT Foundation service.
  • MQTT password: Use the IoT Foundation Authentication Token

Topic names

The IoT Foundation uses a strict convention for MQTT topics that map to device types, device IDs, commands, and events.  Reference the documentation for all the details, but here are the topic strings I used for the command I would send from the application and the event I would subscribe to:
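
They follow the documented convention, where the device type and device id are whatever you registered for the Pi in the IoT Foundation dashboard (“RaspberryPi” and “doorbell” below are placeholders):

// The application publishes commands to .../cmd/... and subscribes to device events on .../evt/...
let commandTopic = "iot-2/type/RaspberryPi/id/doorbell/cmd/takePicture/fmt/json"
let eventTopic   = "iot-2/type/RaspberryPi/id/doorbell/evt/pictureTaken/fmt/json"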

Connecting to IoT Foundation from the Application

Again, sparing some of the details, the connect() method boils down to configuring an MQTTSessionManager with the connection parameters above.
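
A minimal sketch, assuming the MQTTSessionManager API from the MQTTClient pod – the org id, API key, token, and app id are placeholders, and the exact connect and delegate signatures have shifted a bit between MQTTClient releases:

import MQTTClient

class IoTConnection: NSObject, MQTTSessionManagerDelegate {

    let manager  = MQTTSessionManager()
    let orgId    = "myorg"                 // from the Bluemix dashboard
    let apiKey   = "a-myorg-a1b2c3d4e5"    // IoT Foundation API key (placeholder)
    let apiToken = "my-auth-token"         // IoT Foundation authentication token (placeholder)
    let appId    = "doorbellApp"           // anything unique per app instance

    // The pictureTaken event topic shown earlier.
    let eventTopic = "iot-2/type/RaspberryPi/id/doorbell/evt/pictureTaken/fmt/json"

    func connect() {
        manager.delegate = self
        // Subscribe to the pictureTaken event as soon as the session comes up.
        manager.subscriptions = [eventTopic: NSNumber(value: MQTTQosLevel.atLeastOnce.rawValue)]
        manager.connect(to: "\(orgId).messaging.internetofthings.ibmcloud.com",
                        port: 1883,
                        tls: false,
                        keepalive: 60,
                        clean: true,
                        auth: true,
                        user: apiKey,
                        pass: apiToken,
                        will: false,
                        willTopic: nil,
                        willMsg: nil,
                        willQos: .atMostOnce,
                        willRetainFlag: false,
                        withClientId: "a:\(orgId):\(appId)")
    }

    // MQTTSessionManagerDelegate: every message on a subscribed topic lands here.
    func handleMessage(_ data: Data!, onTopic topic: String!, retained: Bool) {
        // hand the packet off to the reassembly callback shown below
    }
}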

Receiving the picture

The first step is to send the takePicture command to the Raspberry Pi through the IoT Foundation.  The actual content of the message is irrelevant in this case.  The operation on the Pi will be invoked just by receiving the command.
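
Using the session manager and the commandTopic string from the sketches above, publishing the command looks roughly like this (the JSON payload is an arbitrary placeholder, and the exact Swift spelling of the send method depends on the MQTTClient version):

// The Pi only cares that something arrived on the takePicture command topic.
let payload = "{\"d\":{}}".data(using: .utf8)
manager.send(payload, topic: commandTopic, qos: .atLeastOnce, retain: false)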

The picture will come back from the Raspberry Pi as a Base64-encoded string as the data in the pictureTaken event.  This wasn’t as easy as I expected.  Turns out that MQTT has a limit to the size of the messages it will transmit.  Who knew?  Well, I didn’t anyway.  So, as you will see shortly, I break the picture up into 3 KB chunks and send them back in sequence.  So on the application side, I created a completion handler that would piece the picture back together again.  The completion handler gets called each time a packet event arrives.

The MQTTSessionManagerDelegate routine handleMessage gets invoked each time a packet arrives, which then invokes the callback.

The callback is smart enough to know when it receives the final packet and then completes the reconstruction of the image and eventually displays the picture in the UIImageView.
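
A sketch of that callback – the JSON field names (data, packet, packets) are assumptions standing in for whatever the splitIntoPackets node actually adds to each message:

import UIKit

// Accumulates Base64 chunks by packet index until all of them have arrived.
var chunks: [Int: String] = [:]

func handlePictureTakenPacket(_ data: Data, completion: (UIImage) -> Void) {
    guard let event = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
          let index = event["packet"] as? Int,
          let total = event["packets"] as? Int,
          let chunk = event["data"] as? String else { return }

    chunks[index] = chunk

    // When the last packet shows up, stitch the Base64 string back together,
    // decode it, and hand the finished image to the caller.
    if chunks.count == total {
        let base64 = (0..<total).compactMap { chunks[$0] }.joined()
        if let imageData = Data(base64Encoded: base64),
           let image = UIImage(data: imageData) {
            completion(image)
        }
        chunks.removeAll()
    }
}

The completion closure is where the UIImageView finally gets its picture.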

Servicing the command from the Raspberry Pi

There are Node-RED nodes available to communicate directly to the IoT Foundation from within Node-RED.  Follow the instructions for node-red-contrib-scx-ibmiotapp to set it up in the Raspberry Pi.

With the ibmiot nodes in place, create the following flow:

Node-RED flow that takes a picture

Some details:

  • takePicture – This is the ibmiot input node that receives the takePicture command from the application.  Configure it with the required credentials.  The device type and device id must match those you created for the Raspberry Pi in the IoT Foundation dashboard.  The command is “takePicture”.
  • Take Picture – This function node is pretty much the same as in an earlier blog post.  It invokes the camera through JavaScript in the Node.js environment.  It sends on a message containing the picture’s filename and timestamp.

  • delay 2 s – The camera is asynchronous, so you need to give it a little time to take the picture and save it to a file before processing.
  • Base64 encode – This is an exec node which runs an operating system command.  The command value is “base64 -w0” and the option to append msg.payload to the command is checked.  This means the OS command base64 will run against the filename provided by the Take Picture node.  “-w0” keeps base64 from injecting newline characters into the string.  The Base64 string is sent to the next node as the payload.
  • splitIntoPackets – This function node slices the payload into an array of messages containing 3k chunks of the data.  The date, picture name, number of packets and packet index are all added to each message as well.  The array becomes a series of messages sent by the node.  The node also has an output that sends the picture name and one that sends the total number of packets.  These are just for debugging purposes.

  • pictureTaken – This ibmiot output node sends the device event “pictureTaken” for each packet.  Use the same values for authentication, device type and device id that you used for the ibmiot input node.

Where are we?

Ok, we covered a lot of ground in this post.  We talked a bit about MQTT and how IoT Foundation fits with it.  We looked at how the Swift code will use the MQTTClient library to send commands and receive picture packets and reassemble them.  We also looked at how the Raspberry Pi will receive the IoT command through the IoT input node, use the Raspicam Node module to take a picture, use an OS command to convert it to Base64, use a function node to packetize it, then use the IoT output node to send the packets back to the phone as events.

I didn’t show it here, but I added some code to have the iOS app invoke this whole process in response to the user’s choice from the Push notification.  So I accomplished what I had set out to do.  The Pi sends a push notification to the phone to let the user know there is someone ringing the doorbell.  The user can then choose to see a picture of the visitor by tapping Picture in the push notification.  The app will then send an MQTT command to the Pi through the IBM IoT Foundation.  The Pi then takes a picture and sends it back to the phone which displays it.

Next

I decided to go for some bonus points.  What if the user could actually watch video of the visitor?  Next post.