A Touch of Light

A step towards enlightenment for those living in the world of darkness.

A Young India Fellowship Initiative in collaboration with University of Pennsylvania


      





Tuesday, December 6, 2011

Autorun

Although we finished the first phase of viSparsh, some fine-tuning was still required to make the belt wearable. The belt still needed to be started by executing a file in an Ubuntu terminal. To resolve that, we needed an autorun file that runs automatically at startup and kick-starts the belt. This can be done by creating an initialization daemon. We have created this daemon, and now the belt can run on its own. The detailed process is given below- 


STEP1: Create the autorun script
Create a new file 'auto_visparsh' on the Desktop containing the commands that start the viSparsh belt. Copy the following commands into this file. 

COMMAND:
cd /root/OpenNI/Platform/Linux-x86/Bin/Release
./SimpleRead.net.exe
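For reference, here is one way the whole step can be done from a terminal. This is a sketch: the file is written to /tmp here purely for illustration (Step 1 above puts it on the Desktop), and the shebang line is an addition that lets init execute the script directly.

```shell
# Write the startup script (to /tmp here for illustration; in Step 1 the
# file lives on the Desktop). The shebang lets init execute it directly.
cat > /tmp/auto_visparsh <<'EOF'
#!/bin/sh
# auto_visparsh: start the viSparsh belt at boot
cd /root/OpenNI/Platform/Linux-x86/Bin/Release
./SimpleRead.net.exe
EOF
chmod +x /tmp/auto_visparsh
```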


STEP2: Copy this autorun file to /etc/init.d 
/etc/init.d contains all the scripts that are run automatically on startup and shutdown. The '.d' suffix conventionally marks a directory of such scripts. Daemons are processes that execute in the background.

COMMAND:
cd /etc/init.d
cp /home/visparsh/Desktop/auto_visparsh /etc/init.d


STEP3: Choose the run-level
Create a 'soft link' to the autorun file in one of the run-level directories {rc0.d, rc1.d, rc2.d, rc3.d, rc4.d, rc5.d, rc6.d}. For our purpose, choose 'rc2.d'. Don't use 'rc0.d', 'rc1.d' or 'rc6.d'; they contain the halt, single-user and reboot scripts. We can also set the priority of the process using 'SXX', where XX represents the priority {00<=XX<=99}. The letter 'S' stands for 'Start'; replacing it with 'K' ('Kill') makes the script run on stop instead.

COMMAND:
cd /etc/rc2.d
ln -s ../init.d/auto_visparsh S99auto_visparsh
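On Debian/Ubuntu, the same runlevel symlinks can also be created with the standard update-rc.d tool instead of linking by hand. A sketch, assuming the script is already in /etc/init.d as in Step 2:

```shell
# Create start/stop links in the default runlevels with priority 99
sudo update-rc.d auto_visparsh defaults 99
```

This manages the S/K links across all runlevels in one go, which is less error-prone than creating each link manually.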

STEP4: Restart
Now restart the board. This time the belt will start automatically.

Tuesday, November 29, 2011

Test Drive-1: GET, SET……… GO!!!!!!!!!!


After one and a half months of hard work, we finally finished the first phase of our project: we have designed viSparsh v1.0. The belt includes a plastic box containing the PandaBoard, mbed and voltage converter, with the Kinect fixed at the front of the belt. Currently we are using three vibration motors on three sides, which we plan to increase to six in version 1.1.

We had our test drive today in our hostel and it worked exceptionally well, with no false alarms. Some of the other fellows helped us by shooting the video and acting as obstacles. Here is the video of this test drive-






Power Supply Issues

After powering the circuit from AC mains, it was time to switch to batteries, as the viSparsh belt has to be mobile. So we purchased a Chinese 12V 2100mAh battery pack to run the Kinect and used a 7805 voltage regulator to power the mbed and PandaBoard at 5V. The batteries drained in a couple of minutes, and the situation got worse after recharging them. Hence, this Chinese battery proved useless for us.
We then used 8 AA-size Kodak 2100mAh rechargeable cells. They powered our setup for some 15 minutes, but that was not what we required, so we got 8 more to power the Kinect separately. Still, the power dissipation from the 7805 was huge, and we wondered why not eliminate it altogether. Since one cell is rated at 1.2V (though it measures between 1.3 and 1.4V), we connected 4 cells in series to draw slightly more than 5V.
Here is the final number of cells we have used in the project:

1. Mbed:                  4 AA Size 1.2V 1000mAh          Output: Slightly more than 5V
2. PandaBoard:        4 AA Size 1.2V 2100mAh          Output: Slightly more than 5V
3. Kinect                  8 AA Size 1.2V 2500mAh          Output: Slightly more than 10V

Here we were able to eliminate the 7805 voltage regulator from our circuit and hence minimized the power dissipation. We tested the belt for around half an hour and it worked perfectly with this configuration.
We may add 4 more cells in parallel to power the PandaBoard.

Another option we may consider is to order a customized battery with our specifications.

Booting Ubuntu on Pandaboard EA3

STEP1: Download the Ubuntu image
Use this link to download the preinstalled image of Ubuntu 10.10. 

STEP2: Prepare the SD card
Insert the SD card into your PC and unmount it. Then use this command to copy the prebuilt image onto the card.

COMMAND:
sudo sh -c 'zcat ./ubuntu-netbook-10.10-preinstalled-netbook-armel+omap4.img.gz|dd bs=4M of=/dev/sdb; sync'
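dd will happily overwrite whatever device it is pointed at, so it is worth confirming that /dev/sdb really is the card before running the dd line. (The device name varies per machine; /dev/sdb above is what it happened to be on ours.)

```shell
# List block devices; the SD card should show up with its expected size
lsblk -d -o NAME,SIZE,TYPE
```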

STEP3: Download and copy required u-boot and MLO files
For PandaBoard version A2 and later, follow these instructions. In our case we had an EA3 version, so we followed them.  

1.      Download the prebuilt panda.tar.bz2 (linked from the instructions above)
2.      Untar with "tar -jxf panda.tar.bz2"
3.      Mount the first partition of the imaged SD card
4.      Copy MLO and u-boot.bin (extracted from the tar file) to the mounted partition.

STEP4: Boot from Pandaboard
Now unmount the SD card, insert it in the PandaBoard and power it up. It should start booting.

STEP5: Now use the same procedure to install OpenNI and Kinect that we used for BeagleBoard.



Saturday, November 26, 2011

Getting ready to release the 1st version



video

Huhuuu! After facing lots of trouble at every step, we were finally able to get the project working by seamlessly integrating all the components. The video above shows how the Kinect is able to detect an object (a human in our case) and raise an alarm depending on the distance between the human and the Kinect. We have used three vibration motors to produce these alert signals. Each motor sends an alert for an obstacle in one particular direction: left, right or middle. These three motors help a visually impaired person identify the direction in which the obstacle is present. We hope to transfer this functionality onto the belt in a day or two. This will mark the end of the first phase of our project, in which we were supposed to make a working prototype of the viSparsh belt.

Wednesday, November 23, 2011

Solving the Response Time Problem


The response time of the data transmitted by the Kinect to the BeagleBoard came to around 4-5 seconds, quite contrary to our expectations. By 'response time' we mean the time lag between two samples of data transmitted by the Kinect to the BeagleBoard.

In order to solve this problem we tried running the same code on the PandaBoard and, hurray, the problem was solved: the response time decreased to a few milliseconds. Though both boards have a 1GHz processor, the BeagleBoard has 512MB of DDR RAM whereas the PandaBoard has 1GB of DDR2 RAM.

Active USB Hub: The way out for our Kinect Detection Problem



To our utter surprise, the Kinect stopped working all of a sudden. On running the Kinect sample programs, it was not able to detect the device, as mentioned in one of our previous posts. A few days back, however, we found the solution. After trying all possible combinations, including the software error possibilities, we were finally left with the active USB hub option. And hurray, it actually worked! The problem was that although the Kinect runs on a 12V adapter, it still draws an ample amount of current from the USB port to which it is connected. While connected directly to the BeagleBoard/PandaBoard, the Kinect was not able to draw a sufficient amount of current from the board. By connecting the Kinect through an active USB hub instead, we were able to resolve the detection problem.

Saturday, November 12, 2011

Time for Efficiency Optimization


The YIF journey has been a life-changer for each of the Young India Fellows. Especially for our team viSparsh, managing a technical project alongside an intensive liberal arts course has been tough. We have been working days and nights to quickly complete the first phase of our ELM (building the viSparsh belt prototype). However, the demanding coursework and other activities at YIF also consumed a significant part of our time. We finally understood the urgency of effective time management for our project, which we had not been able to strictly abide by earlier. As per the guidance of our mentor Prof. Rahul Mangharam, we started by chalking out a plan for the coming two weeks. Given below is the work allocation for our next two weeks:
– Resolving issues with the Kinect on the BeagleBoard (Tushar - 16/11/11)
– Completing all installation work (including OS and Kinect) on the PandaBoard (Jatin - 17/11/11)
– Talking to organizations like Kritical Solutions and Saksham NGO (suggested by YIFP admin) to get input from visually impaired people, learn about their issues, and talk to others already working in this field (Rolly - 30/11/11)
– Making the PCB for the project (Jatin - 22/11/11)
– Integrating all components (Tushar - 26/11/11)
– Making the initial prototype with the belt (Tushar - 26/11/11)
– Updating the blog, contacting people and other communications (Rolly - on a regular basis)
Alongside this initial work distribution, we will soon make a proper timeline till the end of our project to help us keep better track of it and work accordingly to meet the deadlines.

Problem Bubbles Popping Out


It seems we are on a board of 'Snakes & Ladders'. Just when we were about to complete Phase I, another technical issue bugged us: the Kinect, which was working well so far (as posted earlier), is no longer detected by the BeagleBoard. 
We are getting the following error on running a sample Kinect program-
Error: One or more of the following nodes could not be enumerated.
Device: PrimeSense/SensorKinect/5.0.3.4: The device is not connected!
For the last four days we have been struggling with this problem. We have tried various fixes available on the web, but none has worked so far. We have even prepared a new SD card with everything installed from scratch, but the same error persists. We are trying our level best to find anything we have unknowingly been missing. However, this has forced us to push our submission deadlines, especially given that we have term papers and exams in the pipeline.

Monday, November 7, 2011

Battle with Bugs


The entire last week was a battle with bugs and installation problems. The OpenNI website has updated the OpenNI files without updating the Sensor files. This created errors during installation, and we were forced to read almost every blog and website on the web to debug them. We left no stone unturned and finally managed to install the sensor correctly. Here is a detailed description of the installation instructions and debugging. We have tried to put all the problems and their solutions in one place so that it can save others time.

STEP 1: Install required packages for Kinect
We need to install certain packages in Ubuntu. These packages, then, would facilitate the installation of OpenNI and Kinect.

COMMAND:
sudo apt-get install git-core cmake libglut3-dev pkg-config 
sudo apt-get install gcc g++ build-essential libxmu-dev 
sudo apt-get install libxi-dev libusb-1.0-0-dev 
sudo apt-get install doxygen graphviz git

ERROR:
libglut3-dev cannot be installed

FIX:
sudo apt-get install git-core cmake freeglut3-dev pkg-config
sudo apt-get install gcc g++ build-essential libxmu-dev 
sudo apt-get install libxi-dev libusb-1.0-0-dev 
sudo apt-get install doxygen graphviz git
In the latest versions, 'libglut3-dev' has been replaced by 'freeglut3-dev', so install it in place of 'libglut3-dev'. In case it does not work, install libglut3 from launchpad.net/ubuntu/natty/+package/libglut3-dev.

Although some blogs say that 'doxygen' and 'graphviz' are optional, we found that they are required, so don't skip them during installation.

STEP 2: Create a new directory for Kinect

COMMAND:
mkdir ~/kinect
cd ~/kinect

STEP 3: Download OpenNI from the git repository

COMMAND:
git clone https://github.com/OpenNI/OpenNI.git


STEP 4: Install OpenNI

COMMAND:
cd OpenNI/Platform/Linux-x86/Build
make && sudo make install

ERROR1:
Cannot find the metadata file "system.windows.forms.dll"

FIX1:
sudo apt-get install mono-complete
Although it is weird that Linux throws an error about a Windows dll file, the solution is to install 'mono'. Mono is a platform for running and developing applications based on the ECMA/ISO standards.

ERROR2:
No access permission for install.sh and RedistMaker.

FIX2:
The error indicates that the install.sh and RedistMaker files do not have execution permission. Therefore give them permission to execute.
cd ../CreateRedist
sudo chmod +x install.sh RedistMaker
cd ../Build

ERROR3:
CommonMakefile does not exist

FIX3:
sudo apt-get install mono-complete

ERROR 4:
“arm-angstrom-linux-gnueabi” does not exist.
The error is generated because we are on the ARM platform and the Platform.Arm file tries to access the gnueabi file for Angstrom (arm-angstrom-linux-gnueabi), which does not exist in Ubuntu.

FIX 4:
cd Common 
mv Platform.Arm Platform.Arm.BAK
cp Platform.x86 Platform.Arm

ERROR 5:
Unrecognized command line option "-malign-double"
Unrecognized command line option "-msse2"

FIX 5: Edit the Platform.Arm file in ~/kinect/OpenNI/Platform/Linux-x86/Build/Common

In the latest version of OpenNI, the ARM platform has been included under the name Linux-ARM. However, if you go into ~/kinect/OpenNI/Platform/Linux-ARM/Build/ and try to build, you will face errors, so it is better to use the old Linux-x86 platform files with some modifications.

Open ~/kinect/OpenNI/Platform/Linux-x86/Build/Common/Platform.Arm and ~/kinect/OpenNI/Platform/Linux-x86/Build/Common/Platform.x86 files to comment out these lines:

  CFLAGS += -malign-double
  and
  ifeq ($(SSE_GENERATION), 2)
        CFLAGS += -msse2
  else
        ifeq ($(SSE_GENERATION), 3)
               CFLAGS += -msse3
        else
               $(error "Only SSE2 and SSE3 are supported")
        endif
  endif

We need to do this because the '-malign-double' and '-msse2'/'-msse3' flags are valid only for the x86 platform and will not work with gcc on ARM. Commenting out these statements makes the code build on ARM.
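After the edit, the relevant portion of each Platform file should look roughly like this, with the whole block commented out, including the $(error ...) guard (a sketch; indentation follows the original file):

```makefile
#  CFLAGS += -malign-double
#  ifeq ($(SSE_GENERATION), 2)
#        CFLAGS += -msse2
#  else
#        ifeq ($(SSE_GENERATION), 3)
#               CFLAGS += -msse3
#        else
#               $(error "Only SSE2 and SSE3 are supported")
#        endif
#  endif
```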


STEP 5: Download Kinect driver

Most blogs mention https://github.com/boilerbots/Sensor.git for the Kinect sensor's files. Those files are outdated and generated many errors and bugs. Use https://github.com/avin2/SensorKinect instead, which is the updated version and does not produce these errors.

COMMAND: 
cd ~/kinect/
git clone https://github.com/avin2/SensorKinect
cd SensorKinect


STEP 6: Install Kinect driver
cd Platform/Linux-x86/Build
make && sudo make install

ERROR1:
CommonMakefile does not exist

FIX1:
sudo apt-get install mono-complete
Now try building again. If this does not work, do the following:
Open the makefile in ~/kinect/SensorKinect/Platform/Linux-x86/Build (e.g. with gedit) and replace 'LIB_USED' with 'USED_LIBS', then build again. You may find that the name is already 'USED_LIBS'; in that case, follow the instructions below.

Recent versions of OpenNI have renamed the common file to CommonCppMakefile. However, most of the Sensor files have not yet been updated. Therefore, execute these commands to create a soft link to CommonCppMakefile under the name CommonMakefile.
cd /usr/include/ni
ln -s ./CommonCppMakefile ./CommonMakefile

If that does not work, open /usr/include/ni and search for these files-
CommonCppMakefile
CommonDefs.mak
CommonTargets.mak
CommonCSMakefile
CommonJavaMakefile
Platform.CE4100
Platform.x86
Platform.Arm
The OpenNI probably hasn’t created these files. Therefore, you need to copy them here. Execute these commands-
cd ~/kinect/OpenNI/Platform/Linux-x86/Build
cp Common/* /usr/include/ni

STEP 7: Now use your kinect
Connect the Kinect to the BeagleBoard and run the sample programs-
cd ~/kinect/OpenNI/Platform/Linux-x86/Bin/Release
./SampleNiRead


ERROR :
InitFromXml failed: Failed to set USB Interface!

FIX :
sudo rmmod gspca_kinect
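rmmod only lasts until the next boot. To keep the kernel's own gspca_kinect webcam driver from claiming the device every time, the module can be blacklisted via the standard modprobe mechanism (an extra step beyond what we did above; requires root):

```shell
# Stop gspca_kinect from auto-loading at boot
echo "blacklist gspca_kinect" | sudo tee -a /etc/modprobe.d/blacklist.conf
```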


    

Bootstrap


Here are the instructions for downloading an image of Ubuntu Maverick and installing it on an SD/MMC card. You may choose any Ubuntu flavor you wish; we have chosen Maverick 10.10 here. Other flavors can be downloaded from http://rcn-ee.net/deb/rootfs/maverick/.

STEP 1: Download and extract Ubuntu Maverick 10.10
wget http://rcn-ee.net/deb/rootfs/maverick/ubuntu-10.10-r7-minimal-armel.tar.xz
 
tar xJf ubuntu-10.10-r7-minimal-armel.tar.xz
cd ubuntu-10.10-r7-minimal-armel

STEP 2: Look for the MMC/SD card
sudo ./setup_sdcard.sh --probe-mmc

This would result in something like this-
Are you sure? I Don't see [/dev/idontknow], here is what I do see...
 
fdisk -l:
Disk /dev/sda: 500.1 GB, 500107862016 bytes <-x86 Root Drive
Disk /dev/sdb: 7957 MB, 7957325824 bytes    <-MMC/SD card
 
mount:
/dev/sda1 on / type ext4 (rw,errors=remount-ro,commit=0) <-x86 Root Partition
Here you can see that /dev/sdb matches your MMC/SD card's specification.

STEP 3: Partition the MMC/SD card and install the Ubuntu image
sudo ./setup_sdcard.sh --mmc /dev/sdb --uboot beagle

Version                          --uboot option
BeagleBoard Bx                   --uboot beagle_bx
BeagleBoard Cx, xM, A/B/C        --uboot beagle
PandaBoard                       --uboot panda

STEP 4: Insert the MMC/SD card into the SD card slot on the BeagleBoard, connect the monitor and boot it. Use these login details:
Username: ubuntu 
Password: temppwd

STEP 5: Connect to Internet and install GUI and other necessary packages
sudo dhclient eth0
sudo apt-get update
sudo apt-get install xubuntu-desktop
sudo apt-get install xfce4 gdm xubuntu-gdm-theme xubuntu-artwork 
sudo apt-get install xserver-xorg-video-omap3 network-manager
We also ran 'sudo apt-get upgrade' in this step. It took several hours and resulted in a non-graphical interface, so there is no need to do that here. 

STEP 6: Restart the BeagleBoard.


Friday, November 4, 2011

PandaBoard- All thanks to Texas Instruments

Yes!!! Now we have with us the power of the PandaBoard, and all thanks to Texas Instruments for providing the board for our project; and guess what, it's sponsored by them.

After a half-hour-long conference call with Vikas Joshi, representing TI, he sent us a PandaBoard, which will give a special edge to our project. We have already installed an Ubuntu OS image on it and are heading towards the next step to complete Phase I of the project.

Thank You Texas Instruments!!!

Kinectised the BeagleBoard

Finally, we have "Kinectised" the BeagleBoard. After working for more than a week to install OpenNI and the Kinect sensor on the BeagleBoard, things have finally worked out for us.

OpenNI initially showed some errors while installing, but it was done in a couple of days.
Then we tried the methods given on any number of sites on the internet, but still we were not able to install the Kinect sensor. Ultimately, we had to use our brains and employ some tricks to get it working. First we understood the installation by doing it on Linux on our laptop and going through the files and the functions they handle. Then we moved to the BeagleBoard to resolve the error. The detailed process will be posted after we complete the next phase.


So, now the game is ON and we are just waiting for the delivery of the logic level converters and vibration motors we have ordered from Sparkfun. Stay tuned for the next updates!!



Tuesday, October 25, 2011

A new ray of hope amidst Diwali festivities


Huh! After days of searching for the PandaBoard, we have finally found a ray of hope. As mentioned in the previous post, we had planned to buy a PandaBoard for the project; however, its non-availability in the market was a setback. In order to meet the deadlines, we had to start our project on the BeagleBoard until we found a PandaBoard. We were sure that the unmatched feature set and ease of use of the PandaBoard would help us develop a good product quickly, so we kept exploring every possibility of getting a chance to work on one. Hurray! We finally have new hope from Texas Instruments, who have given a positive signal for lending us a PandaBoard at least till the end of our ELM. We are excited to try our hands on it. We wish Diwali brings good news from across the corners for you all too. Happy Diwali Guys and Gals :-) .

Sunday, October 23, 2011

Serially the way out


In order to transmit data from the BeagleBoard to the mbed microcontroller to actuate the vibration motors, we need to transmit it serially. Hence, we successfully tested full-duplex serial communication from Linux (x86 processor) to another PC terminal. The next target is running the Kinect and OpenNI on Ubuntu (we have already successfully checked the same on Windows 7).
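For reference, a bare-bones way to exercise a serial port from a Linux shell looks roughly like this. The device name and baud rate are assumptions (they depend on the adapter and wiring); with TX and RX jumpered together, whatever is written should be read back:

```shell
# Configure the port: 9600 baud, raw mode, no local echo
stty -F /dev/ttyUSB0 9600 raw -echo
cat /dev/ttyUSB0 &                     # reader in the background
echo "visparsh test" > /dev/ttyUSB0    # loops back if TX-RX are jumpered
```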

Down to the basics


Next, we successfully tested serial transmission and reception on the mbed (through serial port pins 9 and 10). 

For testing data transmission, we made use of an eBox (as we didn't have a PC) that was already available with us. To convert the voltages (TTL to RS-232), we had to use a MAX232. The circuit looked as follows:
It must be noted that the PCB shown in the diagram on the right is used only to hold the MAX232 IC in that circuit (since we didn't have a spare IC).

Parallel Action on mbed



While the other work was going on, we registered the mbed online and started exploring it using its online compiler. We tried simple LED programs and PWM, and they worked without any problem.

Getting to the technicalities



As mentioned in the previous post, we faced certain issues with Ubuntu, with different concerns for Natty and Maverick. In the case of Natty, our main concern was the extra space taken up by the OS; we eventually had to buy another 8GB SD card. Thus, the main problem with installing Natty is the space constraint.

In the case of Maverick, we were not able to install it successfully even after strictly following the steps in the wiki link: https://wiki.ubuntu.com/ARM/OMAPMaverickInstall

We were following the installation procedure for BeagleBoard-xM ver A3 and later. However, we later discovered that although the procedure is the same for all BeagleBoard-xM versions after A3, ver C is an exception. Since our BeagleBoard-xM was Rev C, we had to adopt a different procedure (also mentioned in the above link).

Time to explore :-)


Since we had worked on Windows CE7 before, we thought of exploring other operating systems. Spending some time on this was necessary, as we wanted to test the pros and cons of various OSes before choosing one.

We tried installing and booting the OS on BeagleBoard for testing purposes.


1. Angstrom: We started with Angstrom, which worked perfectly fine without creating any issues.


2. Ubuntu flavors: Next we applied the same image-testing procedure to Natty and Maverick separately. We ran into certain difficulties here (discussed in the next post); however, we were able to resolve them soon.

3. Android: Considering the ongoing Android revolution and its high graphics capabilities, we thought to try our hands on it as well. This would help us later in integrating our product with state-of-the-art applications and features.

Thus, we were finally able to successfully install and test all the main OSes on the BeagleBoard. During this process we learned that for an OS to boot successfully, this process is needed-

1. Partitioning the SD card into two parts-

             SD card            : /dev/sdb  [4 GB]
______________________________________________
     Name                     Path              Space                 Type
______________________________________________
boot partition    |       /dev/sdb1     |    74 MB         |   FAT32
rootfs partition  |       /dev/sdb2     | Rest Space      |   EXT3
______________________________________________

2. Copying appropriate files into these partitions-

__________________________________________________
     Name                     Path                           Files
__________________________________________________
boot partition    |       /dev/sdb1     |   MLO, u-boot.bin.ift, uImage
rootfs partition  |       /dev/sdb2     |   vmlinuz
__________________________________________________


These four files are the main ones; for the BeagleBoard-xM we need 'boot.scr' too.
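The setup_sdcard.sh script used earlier does the partitioning for us, but done by hand the layout above would be created roughly like this. This is a sketch using modern sfdisk script syntax; /dev/sdX is a placeholder for the card's device node, and the commands are destructive:

```shell
# DANGER: destructive. /dev/sdX is a placeholder for the SD card device.
sudo sfdisk /dev/sdX <<'EOF'
,74M,c,*
,,L
EOF
sudo mkfs.vfat -F 32 -n boot /dev/sdX1    # FAT32 boot partition
sudo mkfs.ext3 -L rootfs /dev/sdX2        # EXT3 rootfs on the rest
```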

What's in the name ...

After considering various options, we had earlier come up with 'viSparsh' as the name of our project. The word was coined by our team: it is a conglomeration of two words, vision (Vis) and touch (Sparsh). We felt the name viSparsh truly depicted the purpose of our project, which is to aid visually impaired people through the sense of touch. We also thought it an apt name for our team, as the same word signifies our team's aim of touching the lives of several people through our vision. 

Since we also built a small lab for the first time at YIF, we thought of giving it a name too. One of our fellows (Mr. Balaji) helped us in searching for a name, and we all finally agreed on 'e-Capsule Lab' as the name of our laboratory. The reasons for this were manifold. Firstly, the size of the lab is that of a capsule. Secondly, the 'e' in 'e-Capsule' signifies electronics. Thirdly, we aim to make electronic solutions for society with the same aim as capsules in medicine: the betterment of mankind. Thus, we thought the name apt for our lab. We hope that future students of YIF will carry forward the name and build great solutions in this lab that help our society in path-breaking ways.

Thursday, October 13, 2011

The Road Ahead

Some of the other things we have also started working on are as follows:
- We are exploring the options for connecting the BeagleBoard to the Internet. Since it doesn't have built-in WiFi, we have decided to go with a LAN connection for now. However, since the router is placed at some distance, we have to make a 20m cable for our use.
- For a day or two, we are testing various OS platforms to find the best option for our requirements. We have already tested Angstrom successfully, and it worked perfectly fine. Our next targets are Ubuntu Maverick and Android, which we aim to finish soon.
- We have started exploring the Kinect to test its various parts. We even did a Kinect tear-down to gain a better understanding of the product.

Crossing Barriers

As was the case with the non-availability of the PandaBoard, so it was with the mbed. We were in a crisis, as one of our main components (the mbed) was again out of stock. However, we didn't lose hope and started figuring out other ways of getting an mbed to work on. While the search was still on, our teammate Tushar gave us a surprise: we had actually won a free mbed (he had secretly filled in the form). We breathed a sigh of relief, as we had crossed another barrier and saved some money :-) . As a backup we had earlier thought of buying 2 mbeds. Luckily, after a few days the mbed was back in stock, and since one of the required two was already won (:-) ), we placed an order for the second one for safety.

After we received the mbed yesterday, one of our tasks was to get accustomed to the product, as all three of us were new to it. We successfully implemented some serial transmission code through the mbed online compiler.

Slight modification in components

As mentioned in our previous post, we had initially planned to buy a PandaBoard, considering the project's future advancements (WiFi/Bluetooth and other upgraded hardware requirements). However, things never work as planned :-( and so it was with us: no PandaBoard was available in India before 15th Nov '11. Our time was passing swiftly, and we surely could not afford to wait so long. In order to keep to our deadlines, we had to resort to buying a BeagleBoard immediately for the initial work while waiting for the PandaBoard's availability.

Requirement to change the Monitor

We had initially bought a BenQ 18.5" TFT (LED) monitor that had only a VGA port. However, when the BeagleBoard arrived, we figured out that converting HDMI to VGA would not be feasible (the BeagleBoard-xM has HDMI connectivity while our BenQ monitor supported only VGA). Fortunately, the vendor agreed to replace the BenQ monitor with a Samsung LCD monitor supporting both DVI-D and VGA. To use the BeagleBoard with this new monitor, we simply had to use an HDMI-to-DVI-D converter. We even checked XBOX360 connectivity with the monitor, and it worked perfectly fine. Thus, we finally found the appropriate monitor for the project.

Parts List

Name of the main components required in the initial phase:
1. Microsoft X-Box 360 4 GB Kinect Bundle
2. PandaBoard
3. mbed - LPC1768 Development Board
4. Vibration Motors (ROB-08449)
5. Logic Level Converter (BOB-08745)

Get Set Go.....

After a few weeks of hustle and bustle, we were finally able to formally kick-start our project on 10th October 2011, when we also began the 4th term of the program. After the first Skype session with our mentor, our main priorities were figured out to be as follows:

- The first target was to get the budget sanctioned for our project, which took about 10 days, as this was the first time an ELM collaboration was happening between UPenn and YIF. Our budget was finally sanctioned on 23rd Sept. 2011.

- Our second target was to set up a lab at YIF. Since YIFP is in the first year of its launch, we had a space-constraint issue. After pondering various possibilities instead of lingering on, we finally decided to make temporary arrangements for the lab in one of our teammates' hostel rooms.

- On a parallel track, we prepared the parts list, did some market survey, figured out the best prices and ordered the components. As almost all the parts needed to be imported (requiring shipping), starting the project took some more of our time. Since the festive season also began in between, with holidays between 28th September and 9th October '11, a further delay happened in obtaining the components. Overall this part also took around 10 days.

- While these components were arriving, we went to the local market to buy miscellaneous items like a soldering iron, wire, an extension board, LAN cable etc. required to set up our lab.

- Initially, as per our market survey, we had decided on buying an ASUS laptop to best suit our requirements. However, since we each had a Samsung laptop (given to every fellow by YIF), for starters we thought of buying a monitor, keyboard and mouse separately instead of a new laptop (with a graphics card etc.), which cost us less.




Saturday, September 17, 2011

Use Case Scenario

In the typical use case, a visually impaired person buckles the viSparsh belt around his waist. The belt then guides him as he navigates: whenever an obstacle comes into his path, viSparsh informs him of its presence through the vibration motors. The belt uses the RGB image and infrared depth data captured by the Kinect to detect the incoming obstacle and its direction. For example, if there is an obstacle to the front-right of the viSparsh user, the belt generates vibrations on the right side, and the user moves in the opposite direction to avoid the obstacle.


This typical use case scenario is represented below through a figure-




Wednesday, September 14, 2011

First meeting with Prof Mangharam :)


Skype Meeting with Prof. Rahul Mangharam 


12-Sept-2011 
6.30 PM IST (9.30 AM EST)
  •  Prof. Mangharam directed us to treat our ELM project work at YIF as a challenge in the form of a start-up company. He thereby suggested not only emphasizing the technical aspects of the project but also getting exposure to aspects such as market research, user experience, case studies etc.
  • We were asked to immediately set up a lab at YIF to start our project work. For this we were asked to make and share a part list required for starting this project at the earliest.
  • It was mentioned that after the completion of this initial task, we would be required to write a second summary of how this product could be made viable: making it cost-effective, working on a slim look, integrating with other solutions etc. Prof. Mangharam also asked us to give importance to the usability aspects of the product, as there is no point making a technological product unless it fulfils the demand of the users.


Future Vision & Impact


We have been allotted a time span of four weeks to build this prototype, inclusive of simulation and real-time testing. It will be submitted to Prof. Mangharam as a video demonstration.

Then we will focus on making it a more viable, productized version. This includes developing customized variants depending on user requirements: for example, a low-cost solution with only the necessary features would serve people living in rural areas, while a relatively high-cost product with additional features such as GPS tracking and education- and entertainment-related applications would suit users who can afford a more advanced version.

We firmly believe that the final product will add real social value and help the visually impaired break free from the shackles of dependency. It would create a sense of freedom and a zeal for living a happier life, and make them believe that they too have an equal opportunity to grow and achieve. Overall, this product could create a high impact on society.



Sunday, September 11, 2011

Initial Findings

After being given the initial idea, we started finding out more about the latest work in these fields. We found that big universities across the world, including MIT Media Labs and Stanford, have haptics divisions pursuing research in our present domain of interest. Further, since the release of the XBOX360 Kinect in November last year, almost all of these haptics departments have started projects leveraging the Kinect's capabilities to build haptic interfaces. Even though the Kinect's release has not yet completed a year, a vast range of applications has already been developed around it, and the present work given to us belongs to that category: various applications already exist in labs and markets which use the Kinect to build assistive devices for visually impaired people.

Keeping in mind the above facts, we are fully aware that our task is not simply to emulate what has already been done at various places, but to go a few steps further and make an innovative product that helps society grow.

All excited for the first meeting with Prof. Rahul (Our mentor)

A few days back we got the news that Prof. Rahul Mangharam from UPenn will be mentoring us as part of our ELM@YIF, and since then we have been very excited to meet him. He has come up with an initial idea to assuage our thirst for doing research in the areas of HCI, embedded systems and robotics while adding value to society. The initial idea is described in this blog: http://kinecthesia.blogspot.com/ . It was a 3.5-month project taken up by undergraduate students at UPenn early this year.

Since our ELM runs for 8 months, the task given is to deliver a viable product taking 'Kinecthesia' as our premise. The idea statement, as given by the Prof., is "To develop a Haptic Belt with the XBOX360 Kinect for visually impaired persons", where the initial technical effort is to develop some relatively low-cost technology pieces and integrate them into a socially useful application. Although this is just an initial idea, we hope to brainstorm effectively with the Prof. when we talk to him for the first time. We are all geared up to officially start the project from today. Yippeee :-)

Tuesday, December 6, 2011

Autorun

Although we had finished the first phase of viSparsh, some fine-tuning was still required to make the belt wearable. The belt still needed to be started by executing a file in an Ubuntu terminal. To resolve this we needed an autorun file which runs automatically at startup and kickstarts the belt. This can be done by creating an initialization daemon. We have created this daemon and now the belt runs on its own. The detailed process is given below-


STEP1: Create the autorun script
Create a new file 'auto_visparsh' on the Desktop with the commands that start the viSparsh belt as its content. Copy the following commands into this file.

COMMAND:
cd /root/OpenNI/Platform/Linux-x86/Bin/Release
./SimpleRead.net.exe


STEP2: Copy this autorun file to /etc/init.d 
/etc/init.d contains the scripts which are run automatically at startup and shutdown. The '.d' suffix marks it as a directory of scripts; these scripts typically launch daemons, processes which execute in the background.

COMMAND:
cd /etc/init.d
cp /home/visparsh/Desktop/auto_visparsh /etc/init.d


STEP3: Choose the run-level
Create a 'soft link' to the autorun file in one of the run-level directories {rc0.d, rc1.d, rc2.d, rc3.d, rc4.d, rc5.d, rc6.d}. For our purpose choose 'rc2.d'. Don't use 'rc0.d', 'rc1.d' or 'rc6.d'; they hold the scripts for halt, single-user mode and reboot. We can also set the priority of the process using 'SXX', where XX represents the priority {00<=XX<=99}. The letter 'S' represents 'Start' and can be replaced by 'K' ('Kill') to deactivate a script.

COMMAND:
cd /etc/rc2.d
ln -s ../init.d/auto_visparsh S99auto_visparsh
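As an illustration of the naming convention, the tiny helper below (hypothetical, not part of the belt software) decodes what rc reads out of a link name:

```shell
# Illustration only: how rc interprets a link name such as S99auto_visparsh.
decode_rc() {
  case "$1" in
    S*) action=start ;;
    K*) action=stop ;;
    *)  action=unknown ;;
  esac
  printf '%s priority=%s script=%s\n' "$action" \
    "$(printf %s "$1" | cut -c2-3)" "$(printf %s "$1" | cut -c4-)"
}

decode_rc S99auto_visparsh   # start priority=99 script=auto_visparsh
```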

STEP4: Restart
Now restart the board. This time the belt will start automatically.
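The four steps can also be collected into a single helper, sketched below. The directories are passed in as arguments, so nothing here is viSparsh-specific; on the board it would be invoked with the paths used above. Note the chmod: an init script must be executable, which is easy to miss when copying it by hand.

```shell
# Sketch: automate the autorun setup described in STEPs 1-3.
# Example call (paths from the post):
#   install_autorun /home/visparsh/Desktop/auto_visparsh /etc/init.d /etc/rc2.d
install_autorun() {
  src="$1"; init_dir="$2"; rc_dir="$3"
  name=$(basename "$src")
  cp "$src" "$init_dir/$name"
  chmod +x "$init_dir/$name"         # init scripts must be executable
  ln -sf "../$(basename "$init_dir")/$name" "$rc_dir/S99$name"
}
```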

Tuesday, November 29, 2011

Test Drive-1: GET, SET……… GO!!!!!!!!!!


After one and a half months of hard work, we have finally finished the first phase of our project: viSparsh v1.0. The belt includes a plastic box containing the PandaBoard, mbed and voltage converter, with the Kinect fixed at the front of the belt. Currently we are using three vibration motors on three sides, which we plan to increase to six in version 1.1.

We had our test drive today in our hostel and it worked exceptionally well, with no false alarms. Some of the other fellows helped us by shooting the video and acting as the obstacles. Here is the video of the test drive-






Power Supply Issues

After powering the circuit from AC mains, it was time to switch to batteries, as the viSparsh belt has to be mobile. We purchased a Chinese 12V 2100mAh battery pack to run the Kinect and used a 7805 voltage regulator to power the mbed and PandaBoard at 5V. The batteries drained in a couple of minutes, and the situation got worse after recharging, so this Chinese battery proved useless for us.
We then used 8 AA-size Kodak 2100mAh rechargeable cells. They powered our setup for some 15 minutes, which was still not what we required, so we got 8 more to power the Kinect separately. The power dissipation in the 7805 was still huge, and we thought: why not eliminate it? Since one cell is rated 1.2V (though it measures between 1.3-1.4V), we connected 4 cells in series to draw slightly more than 5V.
Here is the final number of cells we have used in the project:

1. mbed:            4 AA-size 1.2V 1000mAh cells      Output: slightly more than 5V
2. PandaBoard:      4 AA-size 1.2V 2100mAh cells      Output: slightly more than 5V
3. Kinect:          8 AA-size 1.2V 2500mAh cells      Output: slightly more than 10V
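The series-pack arithmetic behind these numbers is easy to check; the helper below is just a sketch multiplying cell count by per-cell voltage (1.2V nominal, roughly 1.3V as we measured):

```shell
# Sketch: nominal vs measured series-pack voltage for n cells.
pack_voltage() { awk -v n="$1" -v v="$2" 'BEGIN { printf "%.1f\n", n * v }'; }

pack_voltage 4 1.2   # nominal NiMH pack        -> 4.8
pack_voltage 4 1.3   # as measured under load   -> 5.2 ("slightly more than 5V")
pack_voltage 8 1.3   # the Kinect pack          -> 10.4
```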

Here, we were able to eliminate the 7805 voltage regulator from our circuit and hence minimize the power dissipation. We tested the belt for around half an hour and it worked perfectly with this configuration.
We may add 4 more cells in parallel to power the PandaBoard.

Another option we may consider is to order a customized battery with our specifications.

Booting Ubuntu on Pandaboard EA3

STEP1: Download the Ubuntu image
Use this link to download the pre-installed image of Ubuntu 10.10.

STEP2: Prepare the SD card
Insert the SD card into your PC and unmount it. Then use this command to copy the pre-built image onto the card (replace /dev/sdb with the device your card actually shows up as).

COMMAND:
sudo sh -c 'zcat ./ubuntu-netbook-10.10-preinstalled-netbook-armel+omap4.img.gz|dd bs=4M of=/dev/sdb; sync'

STEP3: Download and copy required u-boot and MLO files
For PandaBoard version A2 and later, follow these instructions. In our case we had an EA3 board, so we followed them.

1.      Download the panda.tar.bz2 archive (from the link in the instructions)
2.      Untar with "tar -jxf panda.tar.bz2"
3.      Mount the first partition of the imaged SD card
4.      Copy MLO and u-boot.bin (extracted from the tar file) to the mounted partition.

STEP4: Boot from Pandaboard
Now unmount the SD card, insert it into the PandaBoard and power it up. It should start booting.

STEP5: Now use the same procedure to install OpenNI and Kinect that we used for BeagleBoard.



Saturday, November 26, 2011

Getting ready to release the 1st version



video

Huhuuu! After facing lots of trouble at every step, we were finally able to get the project working by seamlessly integrating all the components. The video above shows how the Kinect detects an object (a human in our case) and raises an alarm depending on the distance between the human and the Kinect. We have used three vibration motors to produce these alert signals; each motor sends an alert for an obstacle in one particular direction, namely left, right or middle. These three motors help a visually impaired person identify the direction in which an obstacle is present. We hope to transfer this functionality onto the belt in a day or two. This marks the end of the first phase of our project, in which we were supposed to make a working prototype of the viSparsh belt.

Wednesday, November 23, 2011

Solving the Response Time Problem


The response time of the data transmitted by the Kinect to the BeagleBoard came to around 4-5 seconds, quite contrary to our expectations. By 'response time' we mean the time lag between two samples of data transmitted by the Kinect to the BeagleBoard.

In order to solve this problem we tried running the same code on the PandaBoard and hurray, the problem got solved: the response time decreased to a few milliseconds. Though both boards have a 1GHz processor, the BeagleBoard has 512MB of DDR RAM whereas the PandaBoard has 1GB of DDR2 RAM.
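To put a number on that lag, a small hypothetical helper like the one below can be used. It assumes the sample program prints one line per depth frame (pipe its output in) and reports the gap between consecutive lines in milliseconds:

```shell
# Hypothetical helper: report the gap between consecutive stdin lines in ms.
# Usage (assuming one line of output per frame): ./SimpleRead.net.exe | measure_lag
measure_lag() {
  prev=
  while IFS= read -r _; do
    now=$(date +%s%N)                               # nanoseconds since the epoch
    [ -n "$prev" ] && echo "lag_ms=$(( (now - prev) / 1000000 ))"
    prev=$now
  done
}
```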

Active USB Hub: The way out for our Kinect Detection Problem



To our utter surprise, the Kinect stopped working all of a sudden: on running the Kinect sample programs, it was not able to detect the device, as mentioned in one of our previous posts. A few days back we finally found the solution. After trying all the other possibilities, such as software errors, we were left with trying an active USB hub. And hurray, it actually worked! The problem was that although the Kinect runs on a 12V adapter, it still draws an ample amount of current from the USB port to which it is connected, and while connected directly to the BeagleBoard/PandaBoard it was not able to draw sufficient current from the board. By connecting the Kinect through the active USB hub instead of the board directly, we were able to remove the detection problem.

Saturday, November 12, 2011

Time for Efficiency Optimization


The YIF journey has been a life-changer for each of the Young India Fellows. Especially for team viSparsh, managing a technical project alongside an intensive liberal arts course has been tough. We have been working days and nights to quickly complete the 1st phase of our ELM (building the viSparsh belt prototype). However, the demanding coursework and other activities at YIF also consumed a considerable part of our time, so we finally understood the urgency of effective time management, something we had not strictly abided by earlier. Following the guidance of our mentor Prof. Rahul Mangharam, we started by chalking out a plan for the coming two weeks. Given below is the work allocation:
– Resolving issues with the Kinect on the BeagleBoard (Tushar - 16/11/11)
– Completing all installation work (including OS and Kinect) on the PandaBoard (Jatin - 17/11/11)
– Talking to organizations like Kritical Solutions and the Saksham NGO (suggested by the YIFP admin) to get inputs from visually impaired people, learn about their issues and talk to others already working in this field (Rolly - 30/11/11)
– Making the PCB for the project (Jatin - 22/11/11)
– Integrating all components (Tushar - 26/11/11)
– Making the initial project with the belt (Tushar - 26/11/11)
– Updating the blog, contacting people and other communications (Rolly - on a regular basis)
Alongside this initial work distribution, we will soon prepare a proper timeline till the end of the project to help us keep better track of it and meet our deadlines.

Problem Bubbles Popping Out


It seems we are on a board of 'Snakes & Ladders'. Just when we were about to complete phase-I, another technical issue bugged us: the Kinect, which had been working well so far (as posted earlier), is no longer being detected by the BeagleBoard.
We get the following error on running a sample Kinect program-
Error:One or more of the following nodes could not be enumerated.
Device: PrimeSense/SensorKinect/5.0.3.4: The device is not connected!
For the last four days we have been struggling with this problem. We have tried various fixes available on the web, but none has worked so far. We even prepared a new SD card with everything installed from scratch, but the same error persists. We are trying our level best to find whatever we have unknowingly been missing. However, this has pushed back our submission deadlines, also accounting for the fact that we have term papers and exams in the pipeline.

Monday, November 7, 2011

Battle with Bugs


The entire last week was a battle with bugs and installation problems. The OpenNI website updated the OpenNI files without updating the Sensor files, which created errors during installation and forced us to read almost every blog and website on the web to debug them. We left no stone unturned and finally managed to install the sensor correctly. Here is a detailed description of the installation instructions and debugging. We have tried to put all the problems and their solutions in one place so that it can save others time.

STEP 1: Install required packages for Kinect
We need to install certain packages in Ubuntu. These packages, then, would facilitate the installation of OpenNI and Kinect.

COMMAND:
sudo apt-get install git-core cmake libglut3-dev pkg-config 
sudo apt-get install gcc g++ build-essential libxmu-dev 
sudo apt-get install libxi-dev libusb-1.0-0-dev 
sudo apt-get install doxygen graphviz git

ERROR:
libglut3-dev cannot be installed

FIX:
sudo apt-get install git-core cmake freeglut3-dev pkg-config
sudo apt-get install gcc g++ build-essential libxmu-dev 
sudo apt-get install libxi-dev libusb-1.0-0-dev 
sudo apt-get install doxygen graphviz git
In the latest versions 'libglut3-dev' has been replaced by 'freeglut3-dev', so install it in place of 'libglut3-dev'. In case that does not work, install libglut3 from launchpad.net/ubuntu/natty/+package/libglut3-dev.

Although some blogs say that 'doxygen' and 'graphviz' are optional, we found that they are required, so don't skip them during installation.

STEP 2: Create a new directory for Kinect

COMMAND:
mkdir ~/kinect
cd ~/kinect

STEP 3: Download OpenNI from the git repository

COMMAND:
git clone https://github.com/OpenNI/OpenNI.git


STEP 4: Install OpenNI

COMMAND:
cd OpenNI/Platform/Linux-x86/Build
make && sudo make install

ERROR1:
Cannot find the metadata file "system.windows.forms.dll"

FIX1:
sudo apt-get install mono-complete
Although it is weird that Linux throws an error about a Windows dll file, the solution is to install 'mono'. Mono is a platform for running and developing applications based on the ECMA/ISO standards.

ERROR2:
No access permission for install.sh and RedistMaker.

FIX2:
The error indicates that the install.sh and RedistMaker files do not have execute permission. Therefore, give them permission to execute:
cd ../CreateRedist
sudo chmod +x install.sh RedistMaker
cd ../Build

ERROR3:
CommonMakefile does not exist

FIX3:
sudo apt-get install mono-complete

ERROR 4:
“arm-angstrom-linux-gnueabi” does not exist.
The error is generated because we are on the ARM platform and the Platform.Arm file tries to access the gnueabi file for Angstrom (arm-angstrom-linux-gnueabi), which does not exist in Ubuntu.

FIX 4:
cd Common 
mv Platform.Arm Platform.Arm.BAK
cp Platform.x86 Platform.Arm

ERROR 5:
Unrecognized command line option "-malign-double"
Unrecognized command line option "-msse2"

FIX 5: Edit the Platform.Arm file in ~/kinect/OpenNI/Platform/Linux-x86/Build/Common

In the latest version of OpenNI, the ARM platform has been included under the name Linux-ARM. However, if you go into ~/kinect/OpenNI/Platform/Linux-ARM/Build/ and try to build, you will face errors, so it is better to use the old Linux-x86 platform files with some modifications.

Open the ~/kinect/OpenNI/Platform/Linux-x86/Build/Common/Platform.Arm and ~/kinect/OpenNI/Platform/Linux-x86/Build/Common/Platform.x86 files and comment out these lines:

  CFLAGS += -malign-double
  and
  ifeq ($(SSE_GENERATION), 2)
        CFLAGS += -msse2
  else
        ifeq ($(SSE_GENERATION), 3)
               CFLAGS += -msse3
        else
               $(error "Only SSE2 and SSE3 are supported")
        endif
  endif

We need to do this because the '-malign-double' and '-msse2'/'-msse3' flags are valid only on the x86 platform and will not work with gcc on ARM. Commenting out these statements makes the code usable on ARM.
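The manual edit can also be scripted. The sed sketch below only comments out the CFLAGS lines (run it from the Build/Common directory on Platform.Arm and Platform.x86); if make then still reaches the $(error ...) branch, comment out the whole ifeq block as described above:

```shell
# Sketch: comment out the x86-only CFLAGS lines in a platform makefile.
# Usage, from Build/Common:  disable_x86_flags Platform.Arm
disable_x86_flags() {
  sed -i \
    -e 's/^\([[:space:]]*CFLAGS += -malign-double\)/# \1/' \
    -e 's/^\([[:space:]]*CFLAGS += -msse[23]\)/# \1/' \
    "$1"
}
```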


STEP 5: Download Kinect driver

Most blogs mention https://github.com/boilerbots/Sensor.git for the Kinect sensor's files. Those files are not updated and generated many errors and bugs for us. Use https://github.com/avin2/SensorKinect instead, which is the updated version and does not produce these errors.

COMMAND: 
cd ~/kinect/
git clone https://github.com/avin2/SensorKinect
cd SensorKinect


STEP 6: Install Kinect driver
cd Platform/Linux-x86/Build
make && sudo make install

ERROR1:
CommonMakefile does not exist

FIX1:
sudo apt-get install mono-complete
Now try building again. If that does not work, do this-
Open the Makefile in ~/kinect/SensorKinect/Platform/Linux-x86/Build (e.g. with gedit)
Replace 'LIB_USED' with 'USED_LIBS' and build again. You may find that the name is already 'USED_LIBS'; in that case follow the instructions below.

The latest versions of OpenNI have renamed the common makefile to CommonCppMakefile, but most of the Sensor files have not yet been updated. Therefore, execute these commands to create a soft-link copy of CommonCppMakefile with the name CommonMakefile.
cd /usr/include/ni
ln -s ./CommonCppMakefile ./CommonMakefile

If that does not work, open /usr/include/ni and search for these files-
CommonCppMakefile
CommonDefs.mak
CommonTargets.mak
CommonCSMakefile
CommonJavaMakefile
Platform.CE4100
Platform.x86
Platform.Arm
OpenNI probably hasn't created these files, so you need to copy them there. Execute these commands-
cd ~/kinect/OpenNI/Platform/Linux-x86/Build
cp Common/* /usr/include/ni

STEP 7: Now use your Kinect
Connect the Kinect to the BeagleBoard and run the sample programs-
cd ~/kinect/OpenNI/Platform/Linux-x86/Bin/Release
./SampleNiRead


ERROR :
InitFromXml failed: Failed to set USB Interface!

FIX :
sudo rmmod gspca_kinect
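A more permanent variant of this fix, assuming the stock kernel's gspca_kinect video driver is what grabs the device, is to blacklist the module so it is never loaded at boot (the file name below just follows the usual modprobe.d convention):

```shell
# Sketch: stop the gspca_kinect module from claiming the Kinect at boot.
echo "blacklist gspca_kinect" | sudo tee /etc/modprobe.d/blacklist-kinect.conf
```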


    

Bootstrap


Here are the instructions for downloading an image of Ubuntu Maverick and installing it on an SD/MMC card. You may choose any Ubuntu flavor you wish; we have chosen Maverick 10.10 here. Download other flavors of Ubuntu from here: http://rcn-ee.net/deb/rootfs/maverick/.

STEP 1: Download and extract Ubuntu Maverick 10.10
wget http://rcn-ee.net/deb/rootfs/maverick/ubuntu-10.10-r7-minimal-armel.tar.xz
 
tar xJf ubuntu-10.10-r7-minimal-armel.tar.xz
cd ubuntu-10.10-r7-minimal-armel

STEP 2: Look for the MMC/SD card
sudo ./setup_sdcard.sh --probe-mmc

This would result in something like this-
Are you sure? I Don't see [/dev/idontknow], here is what I do see...
 
fdisk -l:
Disk /dev/sda: 500.1 GB, 500107862016 bytes <-x86 Root Drive
Disk /dev/sdb: 7957 MB, 7957325824 bytes    <-MMC/SD card
 
mount:
/dev/sda1 on / type ext4 (rw,errors=remount-ro,commit=0) <-x86 Root Partition
Here you can see that /dev/sdb matches your MMC/SD card specification.

STEP 3: Partition the MMC/SD card and install the Ubuntu image
sudo ./setup_sdcard.sh --mmc /dev/sdb --uboot beagle

Version                       --uboot option
BeagleBoard Bx                --uboot beagle_bx
BeagleBoard Cx, xM, A/B/C     --uboot beagle
PandaBoard                    --uboot panda

STEP 4: Insert the MMC/SD card into the SD card slot on the BeagleBoard, connect the monitor and boot. Use these login details:
Username: ubuntu 
Password: temppwd

STEP 5: Connect to Internet and install GUI and other necessary packages
sudo dhclient eth0
sudo apt-get update
sudo apt-get install xubuntu-desktop
sudo apt-get install xfce4 gdm xubuntu-gdm-theme xubuntu-artwork 
sudo apt-get install xserver-xorg-video-omap3 network-manager
We also used 'sudo apt-get upgrade' in this step; it took several hours and resulted in a non-graphical interface, so there is no need to do that here.

STEP 6: Restart the BeagleBoard.


Friday, November 4, 2011

PandaBoard- All thanks to Texas Instruments

Yes!!! Now we have with us the power of the PandaBoard, and all thanks to Texas Instruments for providing the board for our project. And guess what, it's sponsored by them.

After a half-hour-long conference call with Vikas Joshi representing TI, he sent us a PandaBoard, which will give a special edge to our project. We have already installed the Ubuntu OS image on it and are heading towards the next step to complete Phase I of the project.

Thank You Texas Instruments!!!

Kinectised the BeagleBoard

Finally, we have "Kinectised" the BeagleBoard. After working for more than a week to install OpenNI and the Kinect sensor on the BeagleBoard, things have finally worked out for us.

OpenNI showed some errors while installing, but we had it done in a couple of days. We then tried the methods given on any number of sites on the internet and were still not able to install the Kinect sensor. Ultimately we had to use our brains and employ some tricks to get it working: we first understood the installation by performing it on Linux on our laptop, going through the files and the functions they handle, and then moved to the BeagleBoard to resolve the error. The detailed process will be posted after we complete the next phase.

So now the game is ON, and we are just waiting for the delivery of the logic level converters and vibration motors we ordered from Sparkfun. Stay tuned for the next updates!!



Tuesday, October 25, 2011

A new ray of hope amidst Diwali festivities


Huh! After days of searching for the PandaBoard, we finally found a ray of hope. As mentioned in the previous post, we had planned to buy a PandaBoard for the project; however, its non-availability in the market was a setback, and to meet our deadlines we had to start the project with the BeagleBoard until we could find one. We were sure that the unmatched feature set and ease of use of the PandaBoard would help us develop a good product quickly, so we kept searching for every possibility of getting to work on one. Hurray! We finally have new hope from Texas Instruments, who have given a positive signal about lending us a PandaBoard at least till the end of our ELM. We are excited to try our hands on it. We wish Diwali brings good news from all corners for you all too. Happy Diwali Guys and Gals :-) .

Sunday, October 23, 2011

Serially the way out


In order to transmit data from the BeagleBoard to the mbed microcontroller to actuate the vibration motors, we need to transmit data serially. Hence we successfully tested full-duplex serial communication from Linux (x86 processor) to another PC terminal. The next target is running the Kinect and OpenNI on Ubuntu (we have already successfully checked the same on Windows 7).

Down to the basics


Next, we successfully tested serial transmission and reception from the mbed (through the 9th & 10th serial-port pins).

For testing data transmission, we made use of an eBox (as we didn't have a PC) that was already available with us. To convert the voltages (TTL to RS-232), we had to make use of a MAX232. The circuit looked as follows:
It must be noted that the PCB shown in the diagram on the right is used only for the MAX232 IC in that circuit (since we didn't have a spare IC).

Parallel Action on mbed



While other work was in progress, we registered the mbed online and started exploring it using its online compiler. We tried simple LED and PWM programs and they worked without any problem.

Getting to the technicalities



As mentioned in the previous post, we faced certain issues with Ubuntu, with different concerns for Natty and Maverick. In the case of Natty, our main concern was the extra space taken up by the OS, to the point that we eventually had to buy another 8GB SD card; the main problem with installing Natty is thus the space constraint.

In the case of Maverick, even after strictly following the steps given in the wiki link https://wiki.ubuntu.com/ARM/OMAPMaverickInstall, we were not able to install it successfully. We were following the installation procedure for BeagleBoard xM ver A3 and later. However, we later discovered that although the procedure is the same for all other xM versions after A3, ver C is an exception. Since our BeagleBoard xM was Rev C, we had to adopt a different procedure instead of the normal one (also mentioned in the above link).

Time to explore :-)


Since we had worked on Windows CE7 before, we thought of exploring other operating systems. Spending some time on this process was necessary, as we wanted to test the pros and cons of various OSes before choosing one.

We tried installing and booting the OS on BeagleBoard for testing purposes.


1. Angstrom: We started with Angstrom, which worked perfectly fine without creating any issues.


2. Ubuntu flavors: Next we applied the same image-testing procedure to Natty and Maverick separately. We ran into certain difficulties here (discussed in the next post); however, we were able to resolve them soon.

3. Android: Considering the Android revolution going on at present, with its high graphics capabilities, we thought to try our hands on it as well. This would help us later in integrating our product with state-of-the-art applications and features.

Thus, we were finally able to successfully test-install all the main OSes on the BeagleBoard. During this process we learned that for an OS to boot successfully, this process is needed-

1. Partitioning the SD card into two parts-

             SD card            : /dev/sdb  [4 GB]
______________________________________________
     Name                     Path              Space                 Type
______________________________________________
boot partition    |       /dev/sdb1     |    74 MB         |   FAT32
rootfs partition  |       /dev/sdb2     | Rest Space      |   EXT3
______________________________________________

2. Copying appropriate files into these partitions-

__________________________________________________
     Name                     Path                           Files
__________________________________________________
boot partition    |       /dev/sdb1     |   MLO, u-boot.bin.ift, uImage
rootfs partition  |       /dev/sdb2     |   vmlinuz
__________________________________________________


These four files are the main ones; for the BeagleBoard-xM we need 'boot.scr' too.

What's in the name ...

After considering various options, we had earlier come up with 'viSparsh' as the name of our project. The word, coined by our team, is a conglomeration of two words: vision ('Vi') and touch ('Sparsh'). We felt the name viSparsh truly depicted the purpose of our project, which is to aid visually impaired people through the sense of touch. We also thought it an apt name for our team, signifying our aim of touching the lives of several people through our vision.

Since we also built a small lab at YIF for the first time, we thought of giving it a name too. One of our fellows (Mr. Balaji) helped us search for a name, and we all finally agreed on 'e-Capsule Lab'. The reasons were manifold. Firstly, the size of the lab is that of a capsule. Secondly, the 'e' in 'e-Capsule' signifies electronics. Thirdly, we aim to make electronic solutions for society, just as capsules are made in medicine for the betterment of mankind. We hope that future students of YIF will carry the name forward and build great solutions in this lab that help our society in path-breaking ways.

Thursday, October 13, 2011

The Road Ahead

Some of the other things that we also have started working on are follows:
- We are exploring options for connecting the BeagleBoard to the Internet. Since it doesn't have built-in WiFi, we have decided to go with a LAN connection for now; however, since the router is placed at some distance, we have to make a 20m cable for our usage.
- For a day or two we are testing various OS platforms to find the option best suiting our requirements. We have already tested Angstrom successfully, and it worked perfectly fine. Our next targets are Ubuntu Maverick and Android, which we aim to finish soon.
- We have started exploring the Kinect and testing its various parts. We even did a Kinect tear-down to gain a better understanding of the product.

Crossing Barriers

As was the case with the PandaBoard's non-availability, so it was with the mbed. We were in a crisis, as one of the main components (the mbed) was again out of stock. However, we didn't lose hope and started figuring out other ways of getting an mbed to work on. While the search was still on, our teammate Tushar surprised us: we had actually won a free mbed (he had secretly filled in the form). We took a sigh of relief, having crossed another barrier and saved some money :-) . As a backup we had earlier thought of buying 2 mbeds. Luckily, after a few days the mbed was again in stock, and since one of the required two was already won (:-) ), we placed the order for the 2nd one for safety.

After we received the mbed yesterday, one of our tasks was to get accustomed to it, as all three of us were new to it. We successfully implemented some serial transmission code through the mbed online compiler.

Slight modification in components

As mentioned in our previous blog, we had initially planned to buy a PandaBoard, considering the project's future need for WiFi/Bluetooth and other upgraded hardware. However, things never work as planned :-( and so it was with us: no PandaBoard was available in India before 15th Nov. '11. Time was passing swiftly and we could not afford to wait so long, so to keep to our deadlines we bought 1 BeagleBoard instantly for the initial work while waiting for the PandaBoard's availability.

Requirement to change the Monitor

We had initially bought a BenQ 18.5" TFT (LED) monitor which had only a VGA port. However, when the BeagleBoard arrived we figured out that converting HDMI to VGA would not be feasible (the BeagleBoard-xM has HDMI connectivity while our BenQ monitor supported only VGA). Fortunately, the vendor agreed to replace the BenQ monitor with a Samsung LCD monitor supporting both DVI-D and VGA; to use the BeagleBoard with this new monitor we simply had to use an HDMI-to-DVI-D converter. We even checked XBOX360 connectivity with the monitor and it worked perfectly fine. Thus we finally found the appropriate monitor for the project.

Parts List

Name of the main components required in the initial phase:
1. Microsoft X-Box 360 4 GB Kinect Bundle
2. PandaBoard
3. mbed - LPC1768 Development Board
4. Vibration Motors (ROB-08449)
5. Logic Level Converter (BOB-08745)

Get Set Go.....

After a few weeks of hustle and bustle, we were finally able to formally kick-start our project on 10th October 2011, when we also began the 4th term of the program. After the first Skype session with our mentor, our main priorities were figured out to be as follows:

- The first target was to get the budget sanctioned for our project, which took about 10 days, as this was the first time an ELM collaboration was happening between UPenn and YIF. Our budget was finally sanctioned on 23rd Sept. 2011.

- Our second target was to set up a lab at YIF. Since YIFP is in the first year of its launch, we had a space constraint. After pondering various possibilities, instead of lingering on, we made temporary arrangements for the lab in one teammate's hostel room.

- On a parallel track we prepared the parts list, did some market survey, figured out the best prices and placed the orders. As almost all the parts had to be imported (requiring shipping), this again took some of our time. The festive season also began in between, with holidays from 28th September to 9th October '11, causing a further delay in obtaining the components. Overall this part also took around 10 days.

- While these components were in transit, we went to the local market to buy the miscellaneous items, like a soldering iron, wire, an extension board, a LAN cable etc., required to set up our lab.

- As per our initial market survey we had decided on buying an ASUS laptop to best suit our requirements. However, since we already had a Samsung laptop (given to each of the fellows by YIF), for starting purposes we bought a monitor, keyboard and mouse separately instead of a new laptop (with a graphics card option etc.), which cost us less.




Saturday, September 17, 2011

Use Case Scenario

In the typical use case, a visually impaired person buckles the viSparsh belt around his waist; the belt then guides him as he navigates. Whenever an obstacle comes into his path, viSparsh alerts him to its presence through the vibration motors. The belt uses the RGB image and infrared depth data captured by the Kinect to detect the incoming obstacle and its direction. For example, if there is an obstacle to the front-right of the user, the belt generates vibrations on its right side, and the user moves in the opposite direction to avoid the obstacle.
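The direction logic described above can be sketched roughly as follows. This is only a minimal illustration of the idea, not the actual viSparsh code: the zone split, the 1 m distance threshold, the hit count and the function name are all assumptions made for the example.

```python
# Illustrative sketch of depth-based obstacle direction (not viSparsh code).
# A real Kinect depth frame is 640x480 values in millimetres; here we simply
# split a flat list of readings into left/centre/right zones and flag a zone
# when enough of its pixels fall below a distance threshold.

THRESHOLD_MM = 1000   # assumed: obstacles closer than 1 m trigger a motor
MIN_HITS = 4          # assumed: minimum "near" pixels before a zone fires

def obstacle_zones(depth_row, threshold_mm=THRESHOLD_MM, min_hits=MIN_HITS):
    """Return which of ('left', 'centre', 'right') contain an obstacle.

    depth_row: flat list of depth readings in mm (0 means no reading).
    """
    third = len(depth_row) // 3
    zones = {
        "left": depth_row[:third],
        "centre": depth_row[third:2 * third],
        "right": depth_row[2 * third:],
    }
    active = []
    for name, pixels in zones.items():
        # Count valid readings that are closer than the threshold.
        hits = sum(1 for d in pixels if 0 < d < threshold_mm)
        if hits >= min_hits:
            active.append(name)  # this zone's vibration motor should fire
    return active

# Example: a wall looming on the right side of the frame.
row = [3000] * 8 + [2500] * 8 + [600] * 8
print(obstacle_zones(row))  # -> ['right']
```

In the belt itself, each active zone would then be mapped to the corresponding vibration motor (driven through the mbed); the details of that mapping are beyond this sketch.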


This typical use case scenario is illustrated in the figure below-




Wednesday, September 14, 2011

First meeting with Prof Mangharam :)


Skype Meeting with Prof. Rahul Mangharam 


12-Sept-2011 
6.30 PM IST (9.30 AM EST)
  • Prof. Mangharam directed us to treat our ELM project work at YIF as a challenge in the form of a start-up company. He suggested that we not only emphasise the technical aspects of the project but also get exposure to aspects such as market research, user experience, case studies etc.
  • We were asked to immediately set up a lab at YIF to start our project work. For this we were asked to prepare and share, at the earliest, a parts list required for starting the project.
  • It was mentioned that after completing this initial task we would be required to write a second summary of how this product could be made viable: making it cost-effective, working on a slimmer look, integrating it with other solutions etc. Prof. Mangharam also asked us to give importance to the usability aspects of the product, as there is no point making a technological product unless it fulfils the demands of its users.


Future Vision & Impact


We have been allotted four weeks to build this prototype, inclusive of simulation and real-time testing. It would be submitted to Prof. Mangharam via a video demonstration.

Then we would focus on making it a more viable, productised version. This includes developing customised versions depending on user requirements: for example, a low-cost solution with only the necessary features to serve people living in rural areas, and a relatively high-cost product with various additional features, like GPS tracking and education- and entertainment-related applications, for users who can afford a more advanced version.

We firmly believe that the final product will add social value, helping the visually impaired break free from the shackles of dependency. It would create a sense of freedom in them and give them zeal for living a happier life. It would further make them believe that they too have an equal opportunity to grow and achieve. Thus, overall, this product would create a high impact on society.



Sunday, September 11, 2011

Initial Findings

On being given the initial idea, we started to find out more about the latest developments in these fields. After some study we found that big universities across the world, including MIT (Media Lab), Stanford and others, have haptics divisions pursuing research in the domain of our present interest. Further, since the release of the XBOX360 Kinect in November last year, almost all of these haptics departments have started projects leveraging Kinect's capabilities to build various haptic interfaces. Even though it has been less than a year since Kinect's release, a vast range of applications has already been developed around it, and the present work given to us also belongs to that category: several applications already exist in labs and markets that use the Kinect to build assistive technology devices for visually impaired people.

Keeping the above facts in mind, we are fully aware that our task is not simply to emulate what has already been done elsewhere but to go a few steps further and make an innovative product that helps society grow.

All excited for the first meeting with Prof. Rahul (Our mentor)

A few days back we got the news that Prof. Rahul Mangharam from UPenn will be mentoring us as part of our ELM@YIF. Since then we have been very excited to meet him. Prof. Mangharam has come up with an initial idea to assuage our thirst for doing research in the areas of HCI, embedded systems and robotics while adding value to society. The initial idea is described in this blog: http://kinecthesia.blogspot.com/ . That was a 3.5-month project taken up by undergraduate students at UPenn early this year.

Since our ELM runs for 8 months, our task is to deliver a viable product taking 'Kinecthesia' as our premise. The idea statement, as given by the Prof., is: "To develop a Haptic Belt with the XBOX360 Kinect for visually impaired persons", where the initial technical effort is to develop some relatively low-cost technology pieces and integrate them into a socially useful application. Although this is just an initial idea, we hope to brainstorm effectively with the Prof. when we talk to him for the first time. We are all geared up to officially start the project from today. Yippeee :-)