Thursday, 20 August 2015

Arduino Weather Station

 

Overview


A quick rummage through eBay and you soon realise how many sensor types there are for the humble Arduino, so in this project let's get to work with five interesting ones.  A weather station is the perfect scenario for combining data from all of these sensors and turning it into an interesting view of your surroundings.  I've captured the readings to an SD card, but there's no reason why the data couldn't be sent via a network shield or Wi-Fi to a remote location anywhere on Earth!


Here's a quick radar graph focusing on the air around the Arduino over a 24-hour period, specifically studying its quality, humidity and temperature.

The five Keyes sensors (from left to right):



  • Ambient light
  • Air temperature and humidity
  • Air pressure 
  • Air quality
  • Soil moisture

You could keep going with a raindrop sensor, for example, but after five sensors and an SD card you're going to run out of I/O pins on the Arduino Nano and Uno; if you want to keep expanding, the Arduino Mega is your way forward.


Be autonomous...


You're going to need power, and lots of it, with all of those sensors.  I've been using the EC Technology® 22400 mAh external battery, which I bought from Amazon for £20 in July 2015.




Next you're going to need somewhere to keep it all out of the rain and direct sunlight, which really messes up your temperature readings; the proper name for such a louvred enclosure is a 'Stevenson screen'.





With all that information being captured you need to store it safely.  I found the most reliable way is an SD card via the Arduino shield, which simply means you can add your prototype board on top, hence the three layers in the photo earlier.  The other reason to go for the shield is the real-time clock, which you need to track the date/time when the readings are made.




To conserve power I didn't add an LCD screen but opted for a simple pair of status LEDs.  The green LED pulses on/off as the Arduino enters the sensor reading loop; if there is a problem (a bit like a try/catch) then the red LED illuminates.  This way you can see at a glance whether all is well or there is a problem.  To debug an issue such as a failed sensor, the best way is to plug it into the computer and enable the Serial.println debug commands to see what the root cause is.
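In outline, the logging loop looks something like this minimal sketch; the LED pins, chip-select pin, file name and the readSensors() helper are placeholders for illustration rather than the exact code from the download:

#include <SPI.h>
#include <SD.h>

#define GREEN_LED 6   // Pulses on each pass through the reading loop
#define RED_LED   7   // Illuminates when a reading or SD write fails

void setup()
{
      pinMode(GREEN_LED, OUTPUT);
      pinMode(RED_LED, OUTPUT);
      SD.begin(10);   // Chip-select pin for the SD shield (assumed)
}

bool readSensors(File &log)
{
      // Placeholder: read each sensor here and append one
      // timestamped CSV row using the shield's real-time clock
      log.println("2015-08-20 12:00,21.5,48,1013,87,512");
      return true;
}

void loop()
{
      digitalWrite(GREEN_LED, HIGH);           // Green pulse: taking readings

      File log = SD.open("weather.csv", FILE_WRITE);
      bool ok = log && readSensors(log);
      if (log) log.close();

      if (!ok) digitalWrite(RED_LED, HIGH);    // Red: something went wrong

      digitalWrite(GREEN_LED, LOW);
      delay(60000);                            // One pass per minute
}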




The Weather Station in action!

The source code ...

Good luck, and here is the source code for the Weather Station project, which you are free to download and use.

Any problems or a thank you leave a comment!

Friday, 10 April 2015


Arduino Uno - Range Finder,  RF transmitter and receiver

The radio frequency in this project is 315 MHz


Overview


The range finder sends out a sonic pulse (the 'ping') which bounces off a solid object, then measures the time taken for the echo to be received.  All this hard work is taken care of by the HC-SR04 ultrasonic module.  The transmitting Arduino reads the sensor every two seconds, then uses the VirtualWire library to encode the centimetre value and the RF transmitter to broadcast the data at 315 MHz.

The RF receiver on the second Arduino is listening on the 315 MHz channel and picks up the transmission, which it then decodes and displays on the Liquid Crystal Display.

The 'clicking' on the video is RF interference from the transmitter!

Shopping list hardware

  • 2 * Arduino Uno
  • 1 * HC-SR04 ultrasonic module
  • 1 * 315 MHz RF transmitter and receiver pair
  • 1 * LCD
  • 2 * 10k ohm resistors


Arduino software and libraries

  • Arduino Development Environment (I'm using v1.6.3)
  • VirtualWire to drive the 315 MHz RF transmitter and receiver
  • Liquid Crystal (this will already be in your Arduino Library folder)

Receiver circuit

See the official Arduino wiring diagram for the LCD; the only difference is that pin 8 is used instead of pin 11 (as pin 11 is needed for the receiver).  For more information on the HC-SR04 see this very good overview page.

RF receiver wiring
VCC to 5v
GND to ground
Data to pin 11 (VirtualWire's default receive pin)


Transmitter circuit




For the transmitter Arduino...

#define trigPin 9
#define echoPin 8

#include <VirtualWire.h>

char strDistance[8];  // Big enough for e.g. "199 cm" plus the null terminator

void setup()
{
      // Initialize the IO and ISR
      vw_setup(2000); // Bits per sec
      
      Serial.begin (9600);
      pinMode(trigPin, OUTPUT);
      pinMode(echoPin, INPUT);
      
      send("Meter is ready");
}

void loop()
{
      long duration, distance;
    
      digitalWrite(trigPin, LOW); 
      delayMicroseconds(2); 
      digitalWrite(trigPin, HIGH);
      delayMicroseconds(10);
      digitalWrite(trigPin, LOW);
      
      duration = pulseIn(echoPin, HIGH);
      distance = (duration/2) / 29.1;
      
      if (distance >= 200 || distance <= 0){
        Serial.println("Out of range");
        send("Out of range");
      } else {
        Serial.print(distance);
        Serial.println(" cm");
        
        sprintf(strDistance, "%d cm", (int)distance);  // distance is a long; cast for %d
        send(strDistance);
      }
      delay(2000);
}

void send (char *message)
{
      vw_send((uint8_t *)message, strlen(message));
      vw_wait_tx(); // Wait until the whole message is gone
}
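For the receiving Arduino there's no listing above, so here's a minimal sketch of that side, assuming the LCD wiring described earlier (pin 8 in place of pin 11) and VirtualWire's default receive pin of 11:

#include <VirtualWire.h>
#include <LiquidCrystal.h>

// LCD wired as per the official diagram, with pin 8 in place of pin 11
LiquidCrystal lcd(12, 8, 5, 4, 3, 2);

void setup()
{
      lcd.begin(16, 2);
      vw_setup(2000);    // Bits per sec - must match the transmitter
      vw_rx_start();     // Start listening (default data pin is 11)
}

void loop()
{
      uint8_t buf[VW_MAX_MESSAGE_LEN];
      uint8_t buflen = VW_MAX_MESSAGE_LEN;

      if (vw_get_message(buf, &buflen))  // True when a good message arrives
      {
            lcd.clear();
            for (int i = 0; i < buflen; i++)
            {
                  lcd.write(buf[i]);     // Display the received distance text
            }
      }
}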



Sunday, 5 April 2015

Arduino Uno, Laser, Morse Code, LCD and Humidity / Temperature Sensor


Simple Laser Communication Project


Overview

The temperature and humidity sensor sends its output to the first Arduino Uno, which creates the text that is displayed on the LCD.  The text is converted to Morse Code on the first Arduino and transmitted to the second Arduino using a red laser.

The second Arduino receives the laser light via a photosensitive resistor and decodes the received Morse Code back into ASCII, then sends both the original Morse Code and the ASCII text to the LCD.


  


Shopping list

Although this project looks complicated, it's simply a number of smaller projects coupled together; if you're new to the Arduino take a look at 'Learning the basics'.


Hardware

  • 2 * Arduino Uno
  • 1 * Laser Diode (Keyes KY008)
  • 1 * Photosensitive Resistor (Keyes KY018)
  • 1 * Temperature and Humidity Sensor (Keyes KY015)
  • 1 * Piezo buzzer (optional)
  • 1 * LCD 
  • 2 * 10K ohm resistors
  • 1 * 50 ohm resistor
  • 1 * 100 ohm resistor




Arduino software and libraries



Receiver Circuit


Most of this circuit is devoted to wiring up the LCD.
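The decoding sketch isn't shown here, but at its heart the receiver just thresholds the photosensitive resistor and times how long the laser stays on; the pin, threshold and timings below are assumptions to illustrate the idea:

#define SENSOR_PIN A0
#define THRESHOLD  500    // Tune to suit the ambient light level

void setup()
{
      Serial.begin(9600);
}

void loop()
{
      // Wait for the laser to switch on, then time how long it stays on
      while (analogRead(SENSOR_PIN) < THRESHOLD) { }
      unsigned long start = millis();
      while (analogRead(SENSOR_PIN) >= THRESHOLD) { }
      unsigned long length = millis() - start;

      // At the transmitter's 8 WPM a dot is roughly 150 ms and a dash
      // 450 ms, so anything under ~225 ms classifies as a dot
      Serial.print(length < 225 ? "." : "-");
}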


















Transmitter Circuit

As a reminder, the Arduino on the left in the video is responsible for reading the temperature sensor, creating the text for the LCD, generating the Morse Code and transmitting it through the laser.


For the Transmitting Arduino...

// Usage: morse( <pin number>, <speed WPM>, <1=beep, 0=PTT> )
//        sendmsg( "<text-to-send>" )

#include <Morse.h>
#include <dht11.h>
#define DHT11PIN 2

// Use pin 13 to drive the laser (also the built-in LED of the Arduino 2009)
Morse morse(13, 8, 0);
dht11 DHT11;
String strMessage = "";

void setup()
{
    Serial.begin(9600);  // Needed for the sensor status messages in loop()
}

void loop()
{
      int chk = DHT11.read(DHT11PIN);

      Serial.print("Read sensor: ");
      switch (chk)
      {
        case 0: Serial.println("OK"); break;
        case -1: Serial.println("Checksum error"); break;
        case -2: Serial.println("Time out error"); break;
        default: Serial.println("Unknown error"); break;
      }

      // Use a . for a space as Morse Code does not have spaces!
      strMessage = ".HUMIDITY." + String(DHT11.humidity) + ".PERCENT...TEMP.." + String(DHT11.temperature) + ".CELSIUS..";
  
      //Serial.print(strMessage);

      int intLength = strMessage.length() + 1;
      char charMessage[intLength];
      strMessage.toCharArray(charMessage, intLength);

      morse.sendmsg(charMessage);
     
      delay(30000);
}

Saturday, 11 January 2014

Now with Text-to-Speech and Speech Recognition

Adding text-to-speech was equally painless - I used the System.Speech.Synthesis library and added the below lines of code wherever speech was required.  Now as you say a command it is repeated back via text-to-speech and displayed in the journal.

            // Initialize a new instance of the SpeechSynthesizer.
            SpeechSynthesizer synth = new SpeechSynthesizer();

            // Configure the audio output.
            synth.SetOutputToDefaultAudioDevice();

            // Speak the recognised text passed to the recogniser event.
            synth.Speak(e.Result.Text);


About these videos:

To watch them in synchronisation start Arm Cam at 9 seconds and Computer Cam at 3 seconds.

Arm Cam: Speech controlled robotic arm with text-to-speech feedback

Computer Cam: Speech controlled robotic arm with text-to-speech feedback (no sound)

To help the speech recognition out I was toggling the mic between mute/un-mute and accidentally cut off part of the "wrist up" command, so it thought I said "wrist down"; quickly toggling the mic on again, it picked up part of the text-to-speech repeating the command - this is why you hear "I have no idea what you said", which is triggered on the recognizer.SpeechRecognitionRejected event.  See the blooper at 1:05 (Arm Cam).

Some links for further reading:
If you would like a copy of the full C# source code for my application let me know.
(Just in case you were wondering about the pink desk, it's running on my daughter's laptop!)

Thursday, 9 January 2014

Speech recognition for the robotic arm

While waiting for the step motor to arrive I had the idea of adding speech recognition to the C# project.  It was incredibly simple using the built-in speech recognition capabilities of Windows.

In summary;
  1. Create a grammar of the phrased commands (utterances) such as "light on", "shoulder down"
  2. Use the speech recogniser to pick up the spoken commands
  3. Parse the plain text result and use a simple Switch statement to run the relevant arm command
  4. It really was that simple!

Screenshot of the updated GUI which now accepts voice commands


A short video of the arm moving via voice commands


Some snippets of the code:

 Creating the grammar and running the recogniser
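The gist of it, as a sketch using the System.Speech.Recognition namespace (the utterance list here is illustrative):

            // Build a grammar from the command phrases (utterances)
            Choices commands = new Choices(new string[] {
                "light on", "light off", "shoulder up", "shoulder down",
                "wrist up", "wrist down" });

            SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
            recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

            // Listen on the default microphone and keep recognising
            recognizer.SetInputToDefaultAudioDevice();
            recognizer.SpeechRecognized += recognizer_SpeechRecognized;
            recognizer.SpeechRecognitionRejected += recognizer_SpeechRecognitionRejected;
            recognizer.RecognizeAsync(RecognizeMode.Multiple);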



Parsing the recogniser results and calling the corresponding arm movement function
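Again as a sketch; the movement functions are hypothetical names for the wrappers that drive the arm:

            void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
            {
                // e.Result.Text is the plain text of the recognised utterance
                switch (e.Result.Text)
                {
                    case "light on": LightOn(); break;
                    case "shoulder up": ShoulderUp(); break;
                    case "shoulder down": ShoulderDown(); break;
                    case "wrist up": WristUp(); break;
                    case "wrist down": WristDown(); break;
                }
            }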

Sunday, 5 January 2014

If at first you don't succeed, use YouTube and eBay!

There looks to be a feasible solution to the control precision issue which involves replacing the 5 DC motors with step motors and using a Raspberry Pi (which just happens to have been sitting in my cupboard since the summer) to control them.


Step motor and Pi (Raspberry)


The theory at least is to replace the USB I/O PCB which came with the arm and use the Raspberry Pi instead to translate the movement commands pushed to it from the C# app on the PC.  Michael Horne has created a project to control four step motors, albeit without PC input, in his blog.


Controlling the step motors individually (C)Michael Horne


Before getting too excited though I need to check I can actually get a step motor to fit within the robotic arm, they're currently £5.50 on eBay so it's worth a punt.

Wednesday, 1 January 2014

Happy New Year!

To start the New Year off with a bang I've discovered a fundamental problem with the Robotic Arm, but more about that in a moment.

The core components 'in a nutshell'


So far I have addressed each of the four core components of this project individually and started to pull together an encompassing C# program to get them working together.  It was at this point I started to put the arm through its paces with one of the kids' Christmas presents - Marble Madness.

The issue I discovered is that despite sending an identical set of commands to the arm, the final position was not the same each time; in fact some axes were hugely off!  The PC I/O is the same each time, as verified using the PC application which came with the arm - it's the actual motors themselves.

Now for the fundamental oversight: the DC motors are of course affected by the weight of the arm, battery condition and arm position, i.e., asking a motor to spin for 1 second could result in 10 degrees of movement in one instance and 20 degrees in another.


The view from the arm's claw


The view of the whole arm

Running the tests in the above videos several times resulted in the arm completely missing the marble drop position.

Noooo, now what?!   Maybe I need to take a look at some servos or a step motor to increase accuracy.

Sunday, 1 December 2013

Controlling the robot arm with C#...

This video is rather busy so I'll explain what's going on.

The C# application (called 'Project AM.E') is the main application that will contain all the functionality to start the webcam, analyse the picture for colour bricks, AI to sort the bricks and control the robot arm.  The Microsoft LifeCam window is only there so you can see the robot arm moving.  At the moment the application is just displaying dummy data and moving the arm as a first test...

  • As the application loads it switches on the webcam (so the robot can see!)
  • I press the 'start button' which currently just simulates analysing the picture
  • As the robot arm starts to move I bring the Microsoft LifeCam window back into view so you can see it moving
  • Notice the log at the bottom of the application screen which is showing what's going on throughout


First test controlling the arm

Saturday, 30 November 2013

First draft of the GUI ...

The basic elements are there: the webcam view, the interpreted brick colours and, on the right, the sorted bricks.

I've added a listbox to display the debug information which looks rather good while it is working.

Next is to start controlling the robot arm and coding all of the functionality together.

First draft of the Graphical User Interface v0.1

Friday, 29 November 2013

No disassemble Number 5!

Anyone remember Short Circuit?

Yes, well that shows your age... anyway, one tip when building the robot arm: empty each packet of screws one by one, count them (which is the only way to recognise the subtle differences between them), then label them with their P number.

 Page 5 of 28


I think my cunning plan might have a hole in it; I'm not sure the pincers will open wide enough to pick up the webcam - so instead I might have to mount it on the yellow plastic shroud at the top...


And... finally page 28 of 28!



Thursday, 28 November 2013

The Robot Arm's here ...

I'd best get some Super Glue and a large hammer!



ROBOTIC ARM USB!

Errr, oh oh, look at all these tiny bits

Ikea has nothing on this - more beer needed!

Wednesday, 27 November 2013

Creating 'flight paths' for the robot arm...

To keep the logic simple when controlling the robot arm I've decided to create the concept of 'flight paths', that is, predefined routes the arm will take, rather than having a different route for each sort action.

So to move the unsorted green brick in position 0 into its sorted position 3 I would need to call the following predefined 'flight paths':

  • A - to move the arm to the default unsorted position
  • B.n - where n is the unsorted brick's position, 0 to 8
  • C.n - to return from the unsorted brick's position to the default unsorted position
  • D - to move the arm to the default sorted position
  • E.n - where n is the sorted brick's position, 0 to 8
  • F.n - to return from the sorted brick's position to the default sorted position
  • G - to move the arm to the default unsorted position
  • Either sort another brick, or H - to move back to the arm start position

I could optimise the last brick sort step and move the arm directly from the default sorted position to the arm start position, i.e., missing out G and H for the last sort, and call it step I.
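In code, the example above might chain together like this; RunFlightPath is a hypothetical helper that plays back one predefined route:

            RunFlightPath("A");     // Default unsorted position
            RunFlightPath("B.0");   // Down to unsorted slot 0 and grab the brick
            RunFlightPath("C.0");   // Back to the default unsorted position
            RunFlightPath("D");     // Across to the default sorted position
            RunFlightPath("E.3");   // Down to sorted slot 3 and release the brick
            RunFlightPath("F.3");   // Back to the default sorted position
            RunFlightPath("G");     // Ready for the next brick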

I'll also need a set of flight paths to pick up the web camera, take the picture and return the web camera before starting to sort the bricks.  More code!!!

Robot arm 'flight paths'

Sunday, 24 November 2013

Getting the webcam working ...

This was a lot more complicated than I first thought; there is example code on the Internet to capture images from webcams, but not all of it works on Windows 8.1 64-bit with Microsoft Visual Studio Express 2013 Desktop and C#.

To cut a long story short I found that you need to use Windows Image Acquisition (WIA), which uses the Windows Driver Model (WDM) architecture.

Again this is just another piece of test code which will need stitching into the main application, but what it does do in only a few lines of code is:
  1. Switches on the webcam so you can see the live video feed from the webcam
  2. Enables you to take a picture and automatically save it as a JPEG on your hard drive
  3. Switches off the webcam 

Webcam video and image capture


Using the example code that Samuel dos Anjos has written (link below) I simply added a new button 'Take Picture' to the Form and added a takePic function to the userControl class.

Calling the new Save Image functionality


The video feed is displayed via a PictureBox control (ImgWebCam) in Samuel's example, so all I needed to do was use that control to save its current image as a JPEG.

Saving the current image as a JPEG
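The heart of it is a one-liner along these lines (the file path is just an example):

            // Requires: using System.Drawing.Imaging;
            ImgWebCam.Image.Save(@"C:\temp\webcam.jpg", ImageFormat.Jpeg);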


The earlier code can then pick up that JPEG and start analysing it.  The fact that you can see the live webcam video feed as the robot arm is moving the webcam to take the picture will, I'm sure, look pretty cool!


Some useful example code from Samuel dos Anjos:

Colour detection code...

Fantastic!  The original code was able to find 'blue' within the picture, so the next step is to search a specific region (2) only.  I think my detection will essentially check for red, green, grey, yellow and brown, and report whichever colour it finds.

Search for the colour blue within the entire picture


This is the code which currently scans through the entire picture, so in theory changing the for loop to region 2's coordinates should do the job.

Original code

It seems to work; I tested it with another image that does not have blue at those coordinates and it didn't detect blue.

Modified code to test
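In spirit, the modification is just narrowing the loop bounds to region 2; the bitmap name, coordinates and the blue test here are illustrative:

            // Scan only region 2's pixels rather than the whole bitmap
            bool blueFound = false;
            for (int x = 520; x < 640 && !blueFound; x++)
            {
                for (int y = 40; y < 160 && !blueFound; y++)
                {
                    Color pixel = bitmap.GetPixel(x, y);
                    if (pixel.B > 200 && pixel.R < 100 && pixel.G < 100)
                        blueFound = true;   // Found a blue pixel inside region 2
                }
            }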

The next and more time consuming activity is to write code to scan each region and identify the colour within it.

Testcard and region masking ...

The colour detection looks like it will be quite tricky so let's tackle it first.

I've created a 'testcard' photo using the actual webcam suspended over the Lego bricks; I'll get the robot arm to pick the webcam up and hold it when the picture is taken, to again try and get a fixed point of reference.  The natural/artificial light is going to be a problem; however, I think there is a light on the robot's arm so I'll use that to keep the light consistent.


Photo of the Lego bricks from the webcam 1280 x 960


Hmm, even at maximum resolution with lots of light the picture quality is not great.

Next I had a go at creating a 'region mask' over the top of the picture so that I can start to look at the specific pixel areas where each Lego brick is, from 0 in the top left to 8 in the bottom right.

Masking out the regions on the picture where the colour detection needs to analyse


Using Google again I've found some example code that analyses a bitmap picture and detects whether a selected colour is present.  What I'll do is try and modify it so that, rather than searching the entire picture, it searches specific regions, i.e., 0 through to 8, to see what colour brick is present.

Calculating region 2's coordinates using Microsoft Paint


I'm going to use some example C# code by Hemant Srivastava which detects a colour within an entire image.  It managed to find the blue colour, so let's have a go at detecting the colour in region number 2 by modifying the code to only scan within region 2's coordinates.

See this link for Hemant's example code: 




Google and the Internet are your best friends...

While I'm waiting for the robot arm to arrive in the post I've been looking around for inspiration for the C# code I'll need to control its movement, and I've come across a really great piece of ingenuity from Matthew Dally, who has captured the USB signals sent to the robot arm from the supplied Windows application.  He's taken the codes and written a handy wrapper in C#.

Take a look...

http://matthew-dally.blogspot.co.uk/2012/08/c-5-axis-robot-arm-code.html

 (C) Matthew Dally - C# 5-axis Robot Arm Code

Defining some constants...

OK, so this project could get complicated very quickly if I try to handle every eventuality, e.g., bricks in different places, unlimited colours, different shapes, etc.

So the golden rule of this story is KEEP IT SIMPLE

Fixed points of reference.....

This feels like the best place to start: on the top left of the picture are the 'slots' where the unsorted Lego bricks will be placed.  To the right are the locations where the sorted bricks will be placed.  I think I'll go with a maximum of three colour piles as I don't want to break my new golden rule only 30 seconds after I created it!

The ARM position is where the robot will sit.
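As a first stab the constants might be as simple as this (the names are hypothetical):

            // Fixed points of reference for the sorting run
            const int GridRows = 3;          // 3x3 grid of unsorted bricks
            const int GridColumns = 3;
            const int MaxColourPiles = 3;    // Golden rule: keep it simple!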

Creating some fixed points of reference - to help me and my robot out


The start state with the Lego bricks in their positions ready to be sorted


And now the sorted bricks - just the middle step to go then!

Getting started with the idea...

My intention is to:

1. Use the webcam to look at a 3x3 grid of randomly coloured Lego bricks, the position of the grid will always be the same to help keep the robot arm movement simple.

2. Identify the colour of each Lego brick using the webcam; I'm thinking of using the webcam to take a bitmap picture and look at each brick's location in turn to determine its colour.

3. Determine which colour pile that brick should be put into using C#

4. Use the robot arm to pick up each block and put it into the correct coloured pile

The component parts...

USB Robotic Arm Kit from Maplin 


Microsoft LifeCam VX-3000


Microsoft Visual Studio Express 2013




Getting started

My Robot Project!

Hello!  My goal is to have fun creating a project to sort Lego bricks into piles of different colours using a robot arm, a webcam and C#.  I haven't written any code for the best part of 10 years and have never written anything that controls peripherals, so it might be quite challenging.  That's the fun bit!

I've decided to keep a blog which will be a useful reference for me looking back over the project to see where improvements could be made and also if anyone else wants to have a go.