Welcome to our ECE 3400 website! Over the course of the semester we have been hard at work building an autonomous robot that can:
Follow lines, detect walls, and navigate a maze
Use IR to detect and avoid other robots
Detect treasures using a camera and FPGA image processing
Report all this information back to a base station GUI
Feel free to take a look around to see the progress we have made and how our robot has evolved.
Click any of the following links to see what we were up to in each of our labs and milestones.
Junior, CS Major, ECE Minor.
Junior, ECE and CS Double Major.
Junior, ECE Major, CS Minor.
Junior, ECE Major, CS Minor.
ECE 3400, Fall 2018 (FA18), Team 11
Team Members: Daniel Weber, Adam Wojciechowski, Shivansh Gupta, Vignesh Nandakumar
Team meetings will be in Upson Hall 142 on Fridays from 11:15 - 12:05
Slack will be the primary mode of communication
Decisions will be made by consensus
We will rotate meeting roles every week based on who the team leader is that week. We will have a record keeper, agenda setter, and a meeting coordinator. The agenda will be written and disseminated over Slack the night before the team meeting
The record keeper will rotate every meeting. They will type the minutes during the meeting and disseminate them via Slack afterwards
Every person on the team will have to take on the role of leader. The role of the leader will be to organize meetings and make sure that everything is submitted in a timely manner. Please note here who will be responsible when:
Week 1-4 (Start-up, Lab 1, Milestone 1): Shivansh
Week 5-8 (Lab 2, Lab 3, Milestone 2): Vignesh
Week 9-12 (Lab 4, Milestone 3): Adam
Week 13-16 (Milestone 4, competition, final report): Daniel
a) I participated in formulating the standards, roles, and procedures as stated in this contract.
b) I understand that I am obligated to abide by these terms and conditions.
c) I understand that if I do not abide by these terms and conditions, I will suffer the consequences as stated in this contract.
1) Adam Wojciechowski date: 8/31/2018
2) Vignesh Nandakumar date: 8/31/2018
3) Shivansh Gupta date: 8/31/2018
4) Daniel Weber date: 8/31/2018
In Lab 1, we had to construct several simple circuits using multiple external components and the Arduino Uno microcontroller. With the Arduino microcontroller and the Arduino IDE, we constructed several simple Arduino programs, which culminated with us building our first robot and having it drive autonomously in a figure eight pattern.
The first task was to use the Arduino to blink an internal LED. The on-chip LED is hard-wired to pin 13, which we set up as an output using the following code:
#define PIN_NUMBER LED_BUILTIN
void setup() {
pinMode(PIN_NUMBER, OUTPUT);
}
We then turned the LED on and off by setting it to high and low respectively:
void loop() {
digitalWrite(PIN_NUMBER, HIGH);
delay(1000); // waits for a second
digitalWrite(PIN_NUMBER, LOW);
delay(1000);
}
Demo:
Our next task was to modify the code for task one to blink an external LED. We did this by connecting an external LED to a digital pin through a resistor as shown in the diagram below:
The only change in the code for this task was to redefine PIN_NUMBER from the on-chip LED to the pin that we connected to the external LED:
#define PIN_NUMBER 0
The resistor limits the current sourced through the pin; without it, the LED would act as a near short circuit and could burn out both the Arduino pin and the external LED.
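As a rough sanity check (the exact resistor value isn't critical; 330 Ω is a typical choice for this kind of circuit), the LED current is approximately I ≈ (5 V − V_LED) / R ≈ (5 V − 2 V) / 330 Ω ≈ 9 mA, comfortably below the 40 mA absolute maximum that an ATmega328P I/O pin can source.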
Demo:
Our third task was to detect changes in the resistance value of a potentiometer through an analog input pin and print the value to the serial monitor. This was accomplished by setting up a voltage divider circuit as shown below and connecting the output of the circuit to an analog pin:
We then used the following code to set A0 as our analog pin and set up the serial monitor:
int potIPT = A0;
void setup() {
Serial.begin(9600);
}
Using the analogRead function, we were able to read the voltage value from the analog pin and output that value to the serial monitor every half second:
void loop() {
int val = analogRead(potIPT); // read the input
Serial.println(val); // print to serial monitor
delay(500); // wait half a second
}
Combining the setups of Tasks 2 & 3, we designed the following circuit which allowed us to use the analog signal input from the potentiometer to control the brightness of the external LED:
The LED had to be connected to one of the six digital pins capable of outputting a PWM signal, which was required to simulate the effect of an analog voltage. The digital pin was set up as an output exactly as in Tasks 1 & 2. Inside the loop, we used the analogWrite function to output the PWM signal on the digital pin:
void loop() {
int val = analogRead(potIPT); // read the input pin
analogWrite(PIN_NUMBER, map(val, 0, 1023, 0, 255));
}
The 10-bit ADC (Analog-to-Digital Converter) produces values from 0 to 1023, while analogWrite accepts duty-cycle values from 0 to 255, so we used the map function to scale the reading down and make sure that we didn't exceed the limits of the LED.
Demo:
Our final task before assembling our robot was to use the PWM signal we generated in the previous task to drive and control a Parallax Continuous Rotation Servo. We did this using the Arduino Servo.h library. First we set up the servo and the serial monitor.
#include <Servo.h>
#define PIN_NUMBER 3
Servo myservo; // create servo object to control a servo
int potIPT = A0;
void setup() {
myservo.attach(PIN_NUMBER);
Serial.begin(9600);
}
We then output the potentiometer reading through the digital pin, as in Task 4, to control the speed and direction of the servo, with the only difference being that the range of write values for the servo was 0 to 180 instead of 0 to 255.
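A minimal sketch of that loop, reusing potIPT and myservo from the setup above (for a continuous rotation servo, write values of 0 and 180 are full speed in opposite directions and 90 is stop):
void loop() {
  int val = analogRead(potIPT);             // read the potentiometer (0 - 1023)
  myservo.write(map(val, 0, 1023, 0, 180)); // map into the servo range: 0/180 = full speed, 90 = stop
  Serial.println(val);                      // log the raw reading
  delay(15);                                // give the servo time to respond
}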
Demo:
The parts we used to assemble our robot were:
The first thing we did was mount the servos to the servo mounts. As there was a shortage of screws and nuts in the lab, we only used two screws per servo instead of the four there is room for. The robot built in this lab will not be the same as the final robot, so we just needed enough screws to maintain structural integrity.
The servos were then mounted to the baseplate, using two screws per mount.
The wheels were then mounted to the servos. As one of our servos did not have a screw, we used wheels that fit snugly on the servos. This is an issue that we will need to fix when designing later robots.
Next, we mounted the Arduino on top of the baseplate using two screws.
Finally, we attached the ball bearing wheel to the front of the robot.
Once the robot was built, we wrote code for the robot to autonomously drive in a figure eight pattern.
The 2 servos and digital output pins were set up as in the previous tasks and connected using the attach function:
void setup() {
leftWheel.attach(LEFT_WHEEL_PIN);
rightWheel.attach(RIGHT_WHEEL_PIN);
}
To control the movements of the robot, we had 3 basic functions: moveForward, moveRight, and moveLeft. The moveForward function worked by turning both wheels in the same direction at the same speed. The moveRight function worked by turning the left wheel forward while at the same time turning the right wheel backwards in order to pivot the robot and make it turn right. The moveLeft function worked in the same way but with the directions reversed.
void moveForward() {
leftWheel.write(FORWARD_POS);
rightWheel.write(FORWARD_POS);
delay(straightTime);
}
void moveRight() {
leftWheel.write(FORWARD_POS);
rightWheel.write(BACKWARD_POS);
delay(rightTime);
}
void moveLeft() {
leftWheel.write(BACKWARD_POS);
rightWheel.write(FORWARD_POS);
delay(leftTime);
}
Using these three functions in the following order made the robot drive autonomously in a figure eight pattern:
void loop() {
moveStop(); //stops both wheels
moveForward();
moveLeft();
moveForward();
moveRight();
moveForward();
moveRight();
moveForward();
moveRight();
moveForward();
moveRight();
moveForward();
moveLeft();
moveForward();
moveLeft();
moveForward();
moveLeft();
}
The first time we tested the robot, it became apparent that one of our servos was spinning much faster than the other; so much faster that the robot was practically impossible to control.
We performed a series of tests in order to identify the problem. First, we swapped the pins for our two servos to check whether the issue was in the software rather than in the hardware. After switching the pins had no effect, we decided to try a third servo. The third servo was much closer to the speed we were expecting, so we switched out the overly fast servo. After this switch, we were able to get our robot to drive in straight lines and in a proper figure eight pattern.
Demo:
You can find the full code that we used for the lab here
To build the acoustic setup, we first used the diagram given to us in the lab handout to set up the microphone. We then confirmed that the microphone was working by hooking up the oscilloscope and playing a test tone. Originally, we were only measuring a peak-to-peak voltage of about 20 mV, which would be too low to read directly on the Arduino, so we decided to build an amplifier for our microphone circuit.
We first decided to connect the microphone to an inverting amplifier, but quickly discovered that it did not work. Since our initial microphone circuit had a high DC offset, it railed the op-amp and the signal didn't get amplified. To fix this, we looked into creating an amplifier with more capacitors to get around our unusual DC offset. We used an LM358P op-amp and initially used the values shown below. However, we had to change a few of the component values to optimize it for our circuit.
To get rid of the harmonics generated when playing a tone, we added a low-pass filter with a cutoff around 700 Hz. We used a 22 kOhm resistor with a 10 nF capacitor to accomplish this. The filter will also be useful in a noisy environment where other frequencies will be present.
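For reference, the cutoff frequency of a first-order RC low-pass filter is f_c = 1/(2πRC), so with these values f_c = 1/(2π · 22 kΩ · 10 nF) ≈ 723 Hz, just above the 660 Hz tone we need to pass.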
Here is what the waveform looks like when the 660 Hz tone is played.
In the future, we may want to adjust our microphone amplifier further to get a cleaner output signal so it is easier to read on the Arduino. We are also considering the option of using an external chip to do our FFTs so that our Arduino does not need to waste time on this and can instead work on other important computations.
Here is a picture of the materials we were given:
We first built the simple series circuit shown in the online lab writeup. We then confirmed that the simple circuit was able to detect other robots by hooking up the IR hat to the power supply and powering it with 9 V as specified in the lab writeup. As you can see from the oscilloscope image of the waveform generated by the IR sensor, we were able to verify that the IR hat was indeed outputting a 6.08 kHz signal. However, the waveform was definitely not sinusoidal, so we knew there would be plenty of error harmonics.
After we accomplished this, we decided to add a simple RC low-pass filter to our circuit in order to filter out frequencies above half our sampling rate (which would alias and could "look" like the 6.08 kHz signal after performing the FFT). We decided to set the filter cutoff frequency at about 7 kHz, which should let the IR hat signal through while attenuating every frequency above it. We ended up using a 1 nF capacitor and a 22 kOhm resistor for the low pass filter.
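The same f_c = 1/(2πRC) formula as before gives the cutoff for this filter: f_c = 1/(2π · 22 kΩ · 1 nF) ≈ 7.2 kHz, which passes the 6.08 kHz hat signal while attenuating its higher harmonics.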
We also noticed that the voltage of the IR signal could be quite low when the IR hat was more than a few inches away. Therefore, we decided to build a simple non-inverting op-amp stage with a target gain of around 2. We initially had some difficulties getting the non-inverting op-amp to work correctly. At first we used the LF353 in a non-inverting configuration, but we never saw an amplified signal on the op-amp output; only a constant voltage at around 4.3 volts (not quite the voltage on the Vcc rail). After trying a few different 353 ICs, we tried switching to the LM358 op-amp and our amplification circuit immediately worked. We are still unsure why the 353s weren't working. We decided to use a 12 kOhm resistor as R1 and an 18 kOhm resistor as R2, giving a gain of about 1 + 18/12 = 2.5.
We then gathered FFT data from the oscilloscope for when an IR hat was placed near the sensor. As expected, there is a distinct fundamental frequency at around 6 kHz and many error harmonics at higher frequencies.
We compared that data with the FFT data for when a decoy was brought close to the sensor and noticed a strong fundamental frequency at around 18 kHz.
The final circuit combines the op-amp with the low pass filter: And here is the fully constructed circuit on a breadboard:
Finally, we used the FFT library for the Arduino to measure the IR and audio signal from port A0 and generate the amplitudes in the frequency bins using the following code:
for (int i = 0 ; i < 512 ; i += 2) { // save 256 samples
while(!(ADCSRA & 0x10)); // wait for adc to be ready
ADCSRA = 0xf7; // restart adc
byte m = ADCL; // fetch adc data
byte j = ADCH;
int k = (j << 8) | m; // form into an int
k -= 0x0200; // form into a signed int
k <<= 6; // form into a 16b signed int
fft_input[i] = k; // put real data into even bins
fft_input[i+1] = 0; // set odd bins to 0
}
fft_window(); // window the data for better frequency response
fft_reorder(); // reorder the data before doing the fft
fft_run(); // process the data in the fft
fft_mag_log(); // take the output of the fft
sei();
Serial.println("start");
for (byte i = 0 ; i < FFT_N/2 ; i++) {
Serial.println(fft_log_out[i]); // send out the data
}
We also added some additional setup code to have more control over how fast the ADC sampled data:
TIMSK0 = 0; // turn off timer0 for lower jitter
ADCSRA = 0xe7; // set the adc to free running mode
ADMUX = 0x40; // use adc0
DIDR0 = 0x01; // turn off the digital input for adc0
The 7 at the end of the ADCSRA value corresponds to the ADC clock prescaler, which determines how fast the ADC samples data. This is important to tune so that you get proper frequency resolution for the frequencies you are interested in. For example, we used a prescaler of 7 for the acoustic FFT and a prescaler of 6 for the IR FFT, since the IR signal is at a much higher frequency and therefore needs a higher sampling rate.
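To see why, note that the ADC takes roughly 13 of its (prescaled) clock cycles per conversion in free-running mode, so with the Uno's 16 MHz clock:
prescaler setting 7 (divide by 128): f_s ≈ 16 MHz / 128 / 13 ≈ 9.6 kHz, so each of the 256-point FFT bins is about 38 Hz wide
prescaler setting 6 (divide by 64): f_s ≈ 16 MHz / 64 / 13 ≈ 19.2 kHz, so each bin is about 75 Hz wide
With these rates, the 660 Hz tone lands near bin 18 and the 6.08 kHz IR signal lands in the low 80s, consistent with the bins we found below.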
We then used some MATLAB code to read the FFT bin data from the Arduino and plot the bin number vs. bin amplitude. This allowed us to visually inspect the result of the FFT analysis and figure out the bin number that corresponded to the signal, as well as a reasonable threshold for that bin in order to distinguish the signal from the noise.
myserialport = serial("/dev/cu.wchusbserial1410", "BaudRate", 9600)
fopen(myserialport)
binNums = [0:127];
bins = zeros(1,128);
try
while 1
start = fscanf(myserialport,"%s");
if start == "start"
break
end
end
for i = 1:128
bins(i) = fscanf(myserialport,"%d")
end
catch ME
warning('error occurred');
end
fclose(myserialport)
plot(binNums, bins)
We used MATLAB's seriallist function to get a list of the serial ports in use and figure out which one the Arduino was connected to.
We ran the MATLAB code while the Arduino was connected to the IR sensor and got the following plot when no IR hats were nearby:
We then ran the MATLAB code with an IR hat a few inches away from the sensor and got the following plot:
From that plot we were able to determine that the primary bin we should be checking was bin number 84 and that we should set the threshold for detecting another IR hat at about 60. We then incorporated this thresholding with an LED in order to visualize when the Arduino detects another robot. Here is a video of the full IR circuit, where the red LED shows whether the Arduino has detected another robot:
We then repeated the data gathering procedure for the audio signal and were able to determine that the bin number should be 18 with a threshold of about 180:
We then merged the two code bases: the code first runs the audio FFT analysis, and once the 660 Hz signal has been identified it starts detecting IR hats. We used a flag to switch between the two modes and run the ADC at different speeds:
if (detectingAudio) {
ADCSRA = 0xf7; // restart adc (prescaler 128)
}
else {
ADCSRA = 0xf5; // restart adc (prescaler 32)
}
We then added some control code to detect the 660 Hz audio signal on pin A0 and then switch the state of the Arduino to detect the IR hat on pin A1:
if (detectingAudio) {
if (fft_log_out[micBinNum] > micThresh) {
detectingAudio = 0;
ADMUX = 0x41; // switch to ADC1
}
} else {
if (fft_log_out[irBinNum] > irThresh) {
digitalWrite(7, HIGH);
} else {
digitalWrite(7, LOW);
}
}
A demo of this code can be seen here:
As you can see, the Arduino doesn't start detecting the IR hat until the 660 Hz tone has been played.
You can find the full code that we used in this lab here
The radio team worked on a simulated robot and interfacing with the GUI.
First, we had to download the radio library and set it up in the Arduino IDE. We then replaced the RF24 library's GettingStarted sketch with the version given to us for the lab. The radios were connected to the Arduinos and powered through the DC power supply, as our Arduinos could not supply enough current. Each radio was given a channel number based on the formula given, and then we followed the code from the GettingStarted example.
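A minimal sketch of the radio setup using the RF24 API (the CE/CSN pins, channel, and pipe addresses shown here are placeholders; our real values came from our wiring and from the channel formula in the lab handout):
#include <SPI.h>
#include <RF24.h>

RF24 radio(9, 10);                                            // CE, CSN (placeholder pins)
const uint64_t pipes[2] = { 0x0000000010LL, 0x0000000011LL }; // placeholder pipe addresses

void setup() {
  Serial.begin(9600);
  radio.begin();
  radio.setChannel(0x50);         // placeholder; set from the team-number formula
  radio.setPALevel(RF24_PA_MIN);  // low power is plenty with the radios close together
  radio.openWritingPipe(pipes[0]);
  radio.openReadingPipe(1, pipes[1]);
  radio.startListening();
}

void loop() {
  // GettingStarted-style ping/pong exchange goes here
}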
We had to decide on a data structure to store the contents of the maze. For now, we have decided to implement an unsigned char array to store our data, as this gives 8 bits to store information about every square: 4 bits are used to represent wall directions (North, East, South, and West, in that order), two bits for treasure shape (3 possible shapes plus no treasure), one bit for treasure color (red or blue), and one valid bit (has the robot explored this square yet?).
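A sketch of how one square can be packed and queried (the wall bits match the convert() code below; the exact positions of the shape, color, and valid bits within the low nibble are an illustrative choice):
// One maze square in a single unsigned char:
//   bit 7: north wall, bit 6: east wall, bit 5: south wall, bit 4: west wall
//   bits 3-2: treasure shape (00 = none), bit 1: treasure color, bit 0: explored/valid
unsigned char maze[3][3];

bool hasNorthWall(unsigned char sq) { return bitRead(sq, 7); }
bool isExplored(unsigned char sq)   { return bitRead(sq, 0); }

void setWalls(unsigned char &sq, bool n, bool e, bool s, bool w) {
  bitWrite(sq, 7, n);
  bitWrite(sq, 6, e);
  bitWrite(sq, 5, s);
  bitWrite(sq, 4, w);
}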
First, we generated a maze similar to the 3x3 example given in the GUI release code using our unsigned char array data structure. Initially, we hard coded the movements of our simulated robot just to make sure the data from the array was being read properly. First, the robot put its current coordinates into a string. Based on the 4 wall bits, the robot then appended different messages to the string (ex: 1100 -> "North=true,East=true") and printed it out via the serial port. The Python GUI code then reads the serial output and updates the GUI appropriately. Here is our code for taking the data received from the robot and generating the serial message:
String convert(unsigned char x, unsigned char y, unsigned char curr){
String s = String(y)+","+String(x)+",north=false";
if(bitRead(curr,4)==1){s = s + ",west=true";}
if(bitRead(curr,5)==1){s = s + ",south=true";}
if(bitRead(curr,6)==1){s = s + ",east=true";}
if(bitRead(curr,7)==1){s = s + ",north=true";}
return s;
}
Once we verified that this was working, we added a right-hand wall following algorithm to our simulated robot so that it could autonomously run through the entire programmed maze. Here is the code we wrote to achieve this:
// get the current position's maze data and perform Right hand following
void follow(){
unsigned char curr = maze[y][x];
if((bitRead(curr, dir+4)==0) && (bitRead(curr, (dir+1)%4 + 4)==1)){
adv();
}
else if(bitRead(curr, (dir+1)%4 + 4)==0){
dir = (dir+1)%4;
adv();
}
else if((bitRead(curr, dir+4)==1) && (bitRead(curr, (dir+1)%4 + 4)==1) && (bitRead(curr, (dir+3)%4 + 4)==0)){
dir = (dir+3)%4;
adv();
}
}
Based on the current walls in front of the simulated robot, the robot would either choose to turn left, right, or continue straight. To verify that the simulated robot was indeed right-hand wall following, we created a square maze and had the robot start on the edge. In theory, the robot should never explore the middle of the maze and that is what we observed.
After we had the simulated robot working on the same Arduino as the base station communication, we tested sending the maze information between Arduinos using the Nordic RF modules. We decided on a 2-byte data packet: the first byte contained the x and y coordinates of the robot, while the second byte was identical to the data stored in the unsigned char array. We added code to deconstruct and translate the data packet on the base station.
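A sketch of the packing and unpacking, assuming the x coordinate goes in the upper nibble and y in the lower nibble of the first byte (our actual encoding may differ slightly); radio is the RF24 object and maze, x, and y come from the simulation code above:
// Robot side: pack the current square into 2 bytes and transmit
unsigned char packet[2];
packet[0] = (x << 4) | (y & 0x0F);   // byte 0: coordinates
packet[1] = maze[y][x];              // byte 1: walls/treasure byte for that square
radio.write(packet, sizeof(packet));

// Base station side: receive and decode
unsigned char rx[2];
radio.read(rx, sizeof(rx));
unsigned char rxX = rx[0] >> 4;
unsigned char rxY = rx[0] & 0x0F;
unsigned char square = rx[1];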
Here is a demo video of the simulation. The Arduino connected to the computer is acting as the base station and the other Arduino is simulating a robot in a maze:
The robot team was in charge of integrating everything we had been working on up to this point. This included detecting the 660 Hz signal, detecting IR hats, mapping out a maze using the right hand rule, and transmitting the maze information back to the base station.
Before we jumped into the code, we decided to clean up the circuitry on our robot. We had the IR detection on one breadboard and the servos, line sensors, and distance sensors on another breadboard. We decided to move everything to a single breadboard and add the microphone circuit. However, we realized that our microphone circuit had started behaving in unexpected ways and wasn’t giving a reliable signal anymore.
When probing our old microphone circuit with the oscilloscope, we found that there was some underlying frequency being produced somewhere in the circuit. At first we thought this was due to all the new components added to the robot since the last time the microphone was used, but this issue persisted even after the microphone circuit was separated entirely.
Our original circuit used two op amps for signal amplification. Through probing we found that our first amplification stage was working fine, so we decided to ditch the second stage. To make up for this, we increased the gain of the first stage.
The final circuit was split into 2 sections. The first section contained the IR and microphone circuits, which were connected to the ATmega on pins A0 and A1. The second section contained the connections for the 2 servos, 2 line sensors, and 2 short range IR distance sensors. The ATmega chip was connected to the main Arduino through 2 pins: ATmega pin 13 to Arduino pin 7 and ATmega pin 12 to Arduino pin 8.
We continued to use the ATMega328P chip for performing FFT to sense other robots' IR hats. However, for this lab we also had to detect a 660 Hz signal. We came up with a simple 2-pin communication protocol between the main Arduino and the ATmega. The first pin is a mode select pin, an output on the Arduino and an input on the ATmega. The second pin is the data pin, an output on the ATmega and an input on the Arduino. If the mode select pin was low, the ATmega would try to detect the 660 Hz signal on its analog pin 1. If the mode select pin was high, the ATmega would detect the 6.08 kHz signal on its analog pin 0.
if (digitalRead(MODE_PIN) == LOW) {
mode = 0;
ADMUX = 0x41; // use adc1 for audio
} else {
mode = 1;
ADMUX = 0x40; // use adc0 for IR
}
For each of the 2 modes we had a threshold and a bin number that we would look at to determine if we had detected the signal.
if (mode == 0) {
bin = micBinNum;
thresh = micThresh;
} else {
bin = irBinNum;
thresh = irThresh;
}
If the ATmega determined that it had seen the signal, it would set the data pin high; otherwise, the data pin was set low. We used a small window over the primary bin to make the code more robust to small changes in frequency, since every IR hat has a slightly different frequency:
for(int i = bin - window/2; i <= bin + window/2; i++) {
sum += fft_log_out[i];
}
sum = sum / window;
if (sum > thresh) {
digitalWrite(DATA_PIN, HIGH);
} else {
digitalWrite(DATA_PIN, LOW);
}
We determined the bin numbers and thresholds using the software serial and MATLAB code from previous labs, and burned the FFT code into the ATmega.
After we had the hardware ready for the robot, we started integrating all the code we had written up until now. One issue that we noticed early on while testing was that the interrupts for the digital line sensors seemed to be interfering with the servo motors, which caused them to jitter when we sent them the stop command. We are not completely sure why, but we think it might be that the servo timer interrupt cannot fire while the Arduino is inside an ISR. Because of this, we decided to move the line sensing code out of the ISR and into the main code loop. This required us to restructure our code as a state machine. The state machine executes a small chunk of code each loop depending on which state it is in, and may make state transitions. At the end of the loop, we read the line sensors.
The state machine currently has 8 states. The 1st state is when the robot is waiting for the 660 Hz tone to be detected. In this state, the robot reads the data pin connected to the atmega and switches to the 2nd state if it reads a high value.
The 2nd state is the main line following code. If one of the line sensors is over the white line, then the robot pivots in the opposite direction to compensate. If both of the line sensors are over the white line, then the robot determines that it has reached an intersection. When this happens, the robot moves forwards a little bit to center itself over the intersection and then determines what to do next. It first reads its front and right distance sensors for walls, updates its maze data, and sends the updated data to the base station. Then, if there was no wall to the right, it turns right by going to the 3rd state. If there was a wall to the right and in front, it turns left by going to the 6th state. Otherwise, it keeps going straight by staying in the 2nd state.
The turn left and turn right states are almost the same, just in opposite directions. The first turn left state turns the robot to the left until the left line sensor is off the white line. This state is usually skipped but is necessary in case the robot wasn't completely centered on the intersection. The second turn left state turns the robot to the left until the left line sensor is back on a line. Then it goes to the third turn left state, which turns the robot to the left until the left line sensor is off the white line again. The robot has now turned 90 degrees and can continue going forward by transitioning back to the 2nd state. There are 3 turn left and 3 turn right states, which brings the total up to 8 states. A diagram of the state machine is given below:
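As a rough illustration (not our exact code; state, pin, and helper names here are illustrative), the top of loop() amounts to a switch over an enumerated state, with the line sensors read once at the end of every pass:
enum State { WAIT_FOR_TONE, FOLLOW_LINE,
             RIGHT_1, RIGHT_2, RIGHT_3,
             LEFT_1, LEFT_2, LEFT_3 };
State state = WAIT_FOR_TONE;

void loop() {
  switch (state) {
    case WAIT_FOR_TONE:
      // the ATmega pulls its data pin high once it hears the 660 Hz tone
      if (digitalRead(DATA_PIN) == HIGH) state = FOLLOW_LINE;
      break;
    case FOLLOW_LINE:
      // line following; at an intersection: update the maze, transmit to the
      // base station, then transition to RIGHT_1, LEFT_1, or stay in this state
      break;
    // ... the three turn-right and three turn-left states go here ...
  }
  readLineSensors(); // one sensor reading per pass, fed into the ring buffer described below
}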
We also made our line sensing code a bit more robust to noise by adding a small ring buffer to store the previous N values. We set N to 4 to get a decent average. As stated above, the line sensing code comes after the state transition code, so we have to make sure that each transition only relies on a single line sensor reading. The line sensing code is given below:
pinMode(SENSOR_LEFT_PIN, OUTPUT);
pinMode(SENSOR_RIGHT_PIN, OUTPUT);
digitalWrite(SENSOR_LEFT_PIN, HIGH);
digitalWrite(SENSOR_RIGHT_PIN, HIGH);
delayMicroseconds(10);
pinMode(SENSOR_LEFT_PIN, INPUT);
pinMode(SENSOR_RIGHT_PIN, INPUT);
unsigned long startTime = micros();
int detectLeft = 0;
int detectRight = 0;
unsigned long t = micros();
unsigned long leftTime = SENSOR_LEFT_READING;
unsigned long rightTime = SENSOR_RIGHT_READING;
//time how long the input is HIGH, but quit after 3ms as nothing happens after that
while(t - startTime < 3000) {
if (digitalRead(SENSOR_LEFT_PIN) == LOW && detectLeft == 0) {
leftTime = t - startTime;
detectLeft = 1;
}
if (digitalRead(SENSOR_RIGHT_PIN) == LOW && detectRight == 0) {
rightTime = t - startTime;
detectRight = 1;
}
if (detectLeft == 1 && detectRight == 1){
break;
}
t = micros();
}
leftSenseBuf[leftSenseHead] = leftTime;
leftSenseHead = (leftSenseHead + 1) % SENSOR_AVE_SIZE;
rightSenseBuf[rightSenseHead] = rightTime;
rightSenseHead = (rightSenseHead + 1) % SENSOR_AVE_SIZE;
int sum = 0;
for(int i = 0; i < SENSOR_AVE_SIZE; i++) {
sum += leftSenseBuf[i];
}
SENSOR_LEFT_READING = sum / SENSOR_AVE_SIZE;
sum = 0;
for(int i = 0; i < SENSOR_AVE_SIZE; i++) {
sum += rightSenseBuf[i];
}
SENSOR_RIGHT_READING = sum / SENSOR_AVE_SIZE;
Below is a demo of the robot going through a simple maze. The robot successfully starts on the 660 Hz tone, maps out the 2 by 3 maze, and stops whenever it detects an IR hat to prevent robot collisions. We still had to power the radio with the lab bench power supply because we didn't have a voltage regulator on our board to provide the necessary current.
Here is a top down view of the maze we were running through:
And here is an image of the GUI after the robot had traversed the maze and transmitted the necessary data back to the base station:
You can find the full code that we used for the lab here
Currently, we have moved all of our circuits onto one breadboard for compactness, but it would be best if all of our circuits were on a single PCB so that we can make our robot as compact and robust as possible, avoiding silly errors such as wires falling out. This would also allow us to stack the PCB directly on the Arduino and prevent us from having to add a second level to our robot, further increasing its stability and robustness.
As for the maze-mapping capabilities of the robot, we would like to add a third short-range wall sensor to the side of our robot and a long-range wall sensor on the front of our robot in addition to the short-range sensor already present there, as mentioned in our milestone 2 report. This would allow us to map out the maze more accurately and with fewer errors. On a similar note, we plan on getting multiple readings from the ATMega microcontroller so that we can differentiate useful signals from the background noise picked up by the sensors (possibly through a debouncing state machine) in order to further increase the accuracy and robustness of our robot. Finally, we plan on adding a manual start button for redundancy in case our robot doesn't start on the 660 Hz tone in the actual competition.
In this lab, we were required to develop an FPGA module capable of detecting basic shapes from a camera input and passing the information to the Arduino. This module would then be mounted on the robot so it could detect treasures within the maze. For this to work, we had to ensure that the Arduino, camera, and FPGA were able to communicate with each other. The end goal was to display a test image from the camera on a screen and to detect color, specifically to differentiate between red and blue images.
In the prelab we made a few simple calculations to determine what our camera settings should be and how we should set up the FPGA code. We first determined how much memory we had on the chip in terms of M9K blocks; directly from the data sheet, this is 594 kbits, which is around 74 Kbytes. Next, we figured out that the maximum color information we could output from the camera was RGB 565. After downsampling to RGB 332 (i.e. one byte per pixel), we needed to figure out the maximum resolution at which we could store the image. Looking at the OV7670 datasheet, we had a few options available to us: QCIF, QVGA, and CIF. CIF is 352 by 288, which is around 101 Kbytes, too big to fit in our memory. QVGA is 320 by 240, which is around 76 Kbytes, just barely too big to fit into memory. QCIF is 176 by 144, which is about 25 Kbytes, so that will definitely fit into memory. Therefore, we set the output resolution to the QCIF format.
Setting up the camera required inspecting the data sheet to figure out which of the camera registers we needed to write in order to output the test bar image from the camera. After we defined all of the register addresses, we had to reset all of the registers to their default values, which we did by setting the MSB of COM7. Then, we enabled the external clock by setting CLKRC, set the output format to QCIF RGB by writing COM7, and wrote to COM15 to set the RGB output mode to RGB565 at full range. When enabling the color bar test, we enabled the DSP color bar on COM17 and the color bar on COM7. The final code looks something like this:
First we define the register addresses:
#define OV7670_I2C_ADDRESS 0x21
#define COM7 0x12
#define COM3 0x0C
#define CLKRC 0x11
#define COM17 0x42
#define MVFP 0x1E
#define COM15 0x40
#define COM9 0x14
Then we set the values as specified above:
OV7670_write_register(COM7, 0x80);
set_color_matrix();
OV7670_write_register(COM7, 0x0E);
OV7670_write_register(COM3, 0x08);
OV7670_write_register(CLKRC, 0xC0);
OV7670_write_register(COM17, 0x08);
OV7670_write_register(COM15, 0xD0);
OV7670_write_register(COM9, 0x1A);
OV7670_write_register(MVFP, 0x30);
The first thing we had to do to setup the FPGA was to generate all the clock signals that we would require to run our external devices (namely the camera and the VGA module). This was done by creating a PLL which would take the already provided 50 MHz clock signal as an input and use it to generate the 24 and 25 MHz clock signals. The 24 MHz clock signal was then assigned to a GPIO pin to allow it to be used by the camera. The 25 MHz clock signal was assigned to the clock for the VGA module and the read clock for the M9K module. The 50 MHz clock signal was assigned to the write clock for the M9K module. We used different clocks for the read and write ports of the memory module to make sure that we don’t read and write from the same address at the same time.
///////* INSTANTIATE YOUR PLL HERE *///////
team11PLL team11PLL_inst (
.inclk0 ( CLOCK_50 ),
.c0 ( CLOCK_24_PLL ),
.c1 ( CLOCK_25_PLL),
.c2 ( CLOCK_50_PLL )
);
///////* M9K Module *///////
Dual_Port_RAM_M9K mem(
.input_data(pixel_data_RGB332),
.w_addr(WRITE_ADDRESS),
.r_addr(READ_ADDRESS),
.w_en(W_EN),
.clk_W(CLOCK_50),
.clk_R(CLOCK_25_PLL),
.output_data(MEM_OUTPUT)
);
///////* VGA Module *///////
VGA_DRIVER driver (
.RESET(VGA_RESET),
.CLOCK(CLOCK_25_PLL),
.PIXEL_COLOR_IN(VGA_READ_MEM_EN ? MEM_OUTPUT : BLUE),
.PIXEL_X(VGA_PIXEL_X),
.PIXEL_Y(VGA_PIXEL_Y),
.PIXEL_COLOR_OUT({GPIO_0_D[9], GPIO_0_D[11],GPIO_0_D[13],GPIO_0_D[15],
GPIO_0_D[17],GPIO_0_D[19],GPIO_0_D[21],GPIO_0_D[23]}),
.H_SYNC_NEG(GPIO_0_D[7]),
.V_SYNC_NEG(VGA_VSYNC_NEG)
);
///////* Image Processor *///////
IMAGE_PROCESSOR proc(
.PIXEL_IN(MEM_OUTPUT),
.CLK(CLOCK_25_PLL),
.VGA_PIXEL_X(VGA_PIXEL_X),
.VGA_PIXEL_Y(VGA_PIXEL_Y),
.VGA_VSYNC_NEG(VGA_VSYNC_NEG),
.VSYNC(VSYNC),
.HREF(HREF),
.RESULT(RESULT)
);
Once we had the clocks all set up, we connected the VGA adapter to the specified GPIO pins on the FPGA (GPIO_0[5] -> GPIO_0[23]). The VGA adapter reads the pixel color information from the memory and outputs it as a VGA signal which can be displayed on the screen.
After setting up the FPGA and VGA Adapter, we had to make sure we had the ability to write to the M9K buffer so that we could store the current frame from the camera. Before integrating the camera, we manually wrote a test pattern into the M9K RAM to make sure that the VGA Driver was connected properly and that the correct image would be output to the monitor.
To write our test pattern to the buffer, we used two counter variables: one that incremented on every clock cycle, and one that incremented for every row. When the first counter reached our image width, we incremented the second counter and reset the first one back to zero. When the second counter reached our image height, it meant that we were done writing the entire test pattern into memory, so we reset both counters.
For our test pattern, we simply wrote a red line to the M9K buffer every 10 rows. Here is our code to do this:
///////* Buffer Writer *///////
integer i = 0;
integer j = 0;
always @(posedge CLOCK_24_PLL) begin
W_EN = 1;
X_ADDR = i;
Y_ADDR = j;
if (i < `SCREEN_WIDTH && (j % 10 == 0))
pixel_data_RGB332 = RED;
else
pixel_data_RGB332 = BLACK;
i = i+1;
if (i == `SCREEN_WIDTH) begin
i = 0;
j = j+1;
if (j == `SCREEN_HEIGHT) begin
j = 0;
end
end
end
Here is an image of our result:
We had the camera set up to record pixel data in the RGB 565 format, which meant that each individual pixel consists of 16 bits. However, our M9K RAM could only allocate 8 bits per pixel, so it was necessary to downsample the input coming in from the camera to the RGB 332 format. We also had to take into account the fact that the camera could only output 8 bits of a pixel at a time, so to completely read the 16-bit pixel data, we would need to read two consecutive inputs from the camera. For the most accurate color representation after downsampling, we took the most significant bits from each color and concatenated them together to form our RGB 332 value, which we then wrote to the M9K RAM.
Once we were happy with our downsampling, we had to figure out how and when to write to the buffer so that the image we see is properly synchronized with the camera. Initially we were only using the HREF signal output from the camera in order to tell when to change which line of the image we were writing. When we did this, we saw that our image was skewed both horizontally and vertically. Also, our color bar would slowly shift to the side even though it should remain stationary. We were initially using the 24 MHz clock generated on the FPGA to write to the buffer, since that was the same clock we were sending to the camera, but when we compared the 24 MHz clock with the PCLOCK camera output using the oscilloscope, we realized that PCLOCK was slightly slower than the 24 MHz it was receiving from the FPGA, so we decided to sync our buffer using this clock instead. Our image was now no longer shifting, but our color bar was still vertically and horizontally skewed. After we also incorporated the VSYNC camera output, which indicates when a new frame begins, into our buffer-writing logic, we finally got a steady image from the camera.
On every valid input from the camera, we flipped a bit we labeled k, which allowed us to tell whether the input we were receiving from the camera was the first or second half of the 16-bit pixel data. On receiving the second half of the pixel data, we downsampled it to 8 bits by concatenating together bits [7:5] and [2:0] of the first byte with bits [4:3] of the second byte, updated our x position, and wrote the data to the buffer. When HREF went low, we paused, as the data from the camera was not valid; we updated our y position and reset our x position during this time. When VSYNC went high, we also paused, as the data from the camera was not valid; we reset both our x and y positions during this time so we would be ready to write the next frame to the buffer. The code for the downsampler is shown below:
///////* Downsampler *///////
reg[7:0] i = 0;
reg[7:0] j = 0;
reg k = 0;
reg h_flag = 0;
reg v_flag = 0;
reg [7:0] input_1 = 8'd0;
reg [7:0] input_2 = 8'd0;
wire [7:0] CAMERA_INPUT;
assign CAMERA_INPUT = {GPIO_1_D[23],GPIO_1_D[21],GPIO_1_D[19],GPIO_1_D[17],
GPIO_1_D[15],GPIO_1_D[13],GPIO_1_D[11],GPIO_1_D[9]};
wire P_CLOCK;
assign P_CLOCK = GPIO_1_D[7];
wire HREF;
assign HREF = GPIO_1_D[5];
wire VSYNC;
assign VSYNC = GPIO_1_D[3];
always @(posedge P_CLOCK) begin
if (VSYNC == 1'b1 && v_flag == 0) begin
i = 0;
j = 0;
v_flag = 1;
end
else if (VSYNC == 1'b0) begin
v_flag = 0;
if (HREF == 1'b0 && h_flag == 0) begin
k = 0;
i = 0;
j = j+1;
h_flag = 1;
end
else if (HREF == 1'b1) begin
h_flag = 0;
X_ADDR = i;
Y_ADDR = j;
if (k == 0) begin
input_1 = CAMERA_INPUT;
k = 1;
W_EN = 0;
end
else begin
input_2 = CAMERA_INPUT;
pixel_data_RGB332 = {input_2[7:5], input_2[2:0], input_1[4:3]};
i = i+1;
k = 0;
W_EN = 1;
end
end
end
end
At this point our images were very clear, but our colors were entirely wrong. Instead of reading color, it appeared that our camera was only measuring light intensity. After comparing our register values with the TAs', we realized that we had addressed our COM15 register incorrectly. After we fixed this, we finally received a color bar that was close to what we were expecting.
We switched back to displaying our camera output and finally saw images that were properly colored. We messed with the automatic gain of the camera for a bit until we achieved a brightness level we were happy with.
To test color detection, we decided to use three internal LEDs on the FPGA that would light up to show whether red, blue or neither color was detected. The actual color detection was performed in the Image Processor module.
For every valid pixel, the image processor first extracts its R, G, and B values and then compares the channels against each other to decide whether the pixel is predominantly red, predominantly blue, or neither, incrementing the corresponding counter. The exact comparison was tuned by trial and error and depends on the camera and how it is set up. This procedure is repeated for every pixel in the frame.
At the end of a frame, we compared the counter values for each color against a threshold value, which determined whether the frame was majority red, majority blue, or neither. These threshold values were once again determined by trial and error and depend on the camera and how it is set up. The register res was assigned a value based on what color was seen: if there were enough blue pixels to detect blue, res was set to 3'b111; likewise, res was set to 3'b110 if red was detected; if neither color was detected, res was set to 3'b000. The value of res was assigned to the output RESULT. The counters were then reset for the next frame.
always @(posedge CLK) begin
if (HREF) begin
valR = PIXEL_IN[7:5];
valG = PIXEL_IN[4:2];
valB = {PIXEL_IN[1:0], 1'b0};
if (valB > valR && valB > valG) numB = numB + 16'd1;
else if (valR > valB && valR > valG) numR = numR + 16'd1;
else numN = numN + 16'd1;
end
if (VSYNC == 1'b1 && lastSync == 1'b0) begin //posedge VSYNC
if (numB > threshB) res = 3'b111;
else if (numR > threshR) res = 3'b110;
else res = 3'b000;
end
if (VSYNC == 1'b0 && lastSync == 1'b1) begin //negedge VSYNC
numB = 0;
numR = 0;
numN = 0;
end
lastSync = VSYNC;
end
In the DE0-Nano top-level module, we took RESULT from the image processor and turned on the respective LEDs to indicate what the module had found. If the image processor determined that the frame was majority red, we turned on LED 7 on the FPGA. If the image processor determined that the frame was majority blue, we turned on LED 6 on the FPGA. Finally, if the image processor determined that the frame was neither majority red nor majority blue, we turned on LED 0 on the FPGA.
///////* Color Detection *///////
reg LED_7;
reg LED_6;
reg LED_0;
assign LED[7] = LED_7;
assign LED[6] = LED_6;
assign LED[0] = LED_0;
always @(*) begin
if (RESULT == 3'b110) begin //red detected: turn on LED 7
LED_7 = 1;
LED_6 = 0;
LED_0 = 0;
end
else if (RESULT == 3'b111) begin //blue detected: turn on LED 6
LED_7 = 0;
LED_6 = 1;
LED_0 = 0;
end
else begin
LED_7 = 0;
LED_6 = 0;
LED_0 = 1;
end
end
For ease of use, and because we have enough pins left, we will go with a parallel data transmission protocol. We will use 3 wires that are each either low or high to indicate one of the 7 states of the treasure detector plus an additional invalid state. The 7 states are:
In this demo, we used the LEDs on the FPGA to distinguish different colors. One LED turns on if the camera doesn't detect a red or blue shape, another LED turns on if the camera detects a red shape, and a third LED turns on if the camera detects a blue shape:
For Milestone 1, we had to take the robot we built in the previous lab and give it the ability to maneuver a grid using line sensors. We did this by first programming our robot to follow a straight line, and then modifying our code to allow for the detection of intersections to drive the robot in a figure eight pattern on the grid provided to us in the lab space.
After some trial and error, we decided to attach the line sensors to the front of our robot rather than the rear. We decided on using only 2 line sensors, since we found that this was sufficient to keep the robot relatively well oriented while also saving pins for later in the project. We first soldered male pin headers to the sensors so we could easily attach and remove wires between the sensors and the Arduino. Then we attached the sensors to the bottom of our base plate using a couple of nuts and some extenders, making sure to position them pointing slightly inwards to keep them as close to the robot's center of rotation as possible, as this makes turning easier.
We chose to space out our sensors a little wider than the line so the robot has some flexibility and does not need to correct itself too often.
The first thing we had to do was determine whether our line sensor was a digital or analog line sensor. We initially assumed that all the sensors were analog; however, when we tried displaying the output of the sensor on the Serial Monitor while holding it over the white grid lines and the darker background colors, the numbers did not line up with what we were expecting. So we looked at the documentation for our sensor and discovered that it was actually a digital sensor. After setting up and testing the sensors using the digital pins on our Arduino, the values made more sense and we determined that the threshold for white was any reading below 400.
As we did in the previous lab, we initialized the servos and defined the variables necessary for them. In addition, we also had to initialize the sensors and this was done by assigning each one a digital pin along with a read and timer variable.
#define SENSOR_LEFT_PIN 2
#define SENSOR_RIGHT_PIN 3
// These variables are marked 'volatile' to inform the compiler that
// they can change at any time (as they are set by hardware interrupts).
volatile long SENSOR_LEFT_TIMER;
volatile long SENSOR_RIGHT_TIMER;
volatile int SENSOR_LEFT_READING;
volatile int SENSOR_RIGHT_READING;
Because the sensors we used are digital, a digital write to the pin is used to trigger a sensor reading. This was done by setting the appropriate pin as an output, calling digitalWrite() to pull the pin up to HIGH, and then setting it back to an input before reading:
void setup_sensor(int pin, long *sensor_timer) {
*sensor_timer = micros();
pinMode(pin, OUTPUT);
digitalWrite(pin, HIGH);
pinMode(pin, INPUT);
}
In the setup method, each sensor was assigned an interrupt handler, since that was how a reading was taken from the sensor. The ISR for each sensor calculated the sensor reading and then reset the sensor's timer by setting it equal to the current time. Each ISR was attached to a sensor using attachInterrupt(). Finally, the sensors and servos were initialized as described earlier.
void SENSOR_LEFT_ISR() {
// The sensor light reading is inversely proportional to the time taken
// for the pin to fall from high to low. Lower values mean lighter colors.
SENSOR_LEFT_READING = micros() - SENSOR_LEFT_TIMER;
// Reset the sensor for another reading
setup_sensor(SENSOR_LEFT_PIN, &SENSOR_LEFT_TIMER);
}
void setup() {
Serial.begin(9600);
// Tell the compiler which pin to associate with which ISR
attachInterrupt(digitalPinToInterrupt(SENSOR_LEFT_PIN), SENSOR_LEFT_ISR, LOW);
attachInterrupt(digitalPinToInterrupt(SENSOR_RIGHT_PIN), SENSOR_RIGHT_ISR, LOW);
// Setup the sensors
setup_sensor(SENSOR_LEFT_PIN, &SENSOR_LEFT_TIMER);
setup_sensor(SENSOR_RIGHT_PIN, &SENSOR_RIGHT_TIMER);
// Setup the servos
leftWheel.attach(LEFT_WHEEL_PIN);
rightWheel.attach(RIGHT_WHEEL_PIN);
delay(2000);
}
Our algorithm to keep the robot following a straight line was as follows. As long as neither of the line sensors detected a white line, both wheels of the robot would just keep going forward as usual. However, if one of the line sensors did detect a white line, we would stop that wheel until it no longer detected the white line, at which point it would continue moving forward. This works because stopping the wheel on one side while keeping the wheel on the other side moving forward makes the robot naturally pivot around its center of rotation to correct its path. We initially had the line sensors set up on the rear of the robot, but as we discovered, this did not work very well because the robot's center of rotation was closer to its rear. Once we reversed the direction of our robot, our algorithm worked like a charm. Here is how it was implemented:
void loop() {
//Continues moving forwards if both line sensors see a white line
if (SENSOR_LEFT_READING < 400 && SENSOR_RIGHT_READING < 400) {
leftWheel.write(FORWARD_LEFT);
rightWheel.write(FORWARD_RIGHT);
} else {
//Stops left wheel to correct the robot if left line sensor sees the white line
if (SENSOR_LEFT_READING < 400) leftWheel.write(STOP_POS);
else leftWheel.write(FORWARD_LEFT);
//Stops right wheel to correct the robot if right line sensor sees the white line
if (SENSOR_RIGHT_READING < 400) rightWheel.write(STOP_POS);
else rightWheel.write(FORWARD_RIGHT);
}
}
As you can see in the code above, if both of the line sensors see the white line at the same time, the robot has arrived at an intersection. For now, we want to ignore the intersections and thus the robot is programmed to just continue moving forward, but they become more important in the next section.
Demo:
Now that the robot was able to follow a straight line, we set about programming it to autonomously traverse a grid in a figure eight pattern.
The algorithm for this movement was generally the same as the one for following a straight line, with the only difference being the behaviour of the robot when it encountered an intersection. On reaching an intersection, the robot would first need to move forward a little bit to center itself over the intersection (since the line sensors are placed a little ahead of the wheels). The amount of time the robot needed to move forward to center itself was determined by trial and error. We then needed the robot to turn left or right, which we did using the code we had written for a similar task in Lab 1. However, since we were now powering more components off the Arduino, each individual component received less power, so turns took a lot longer than they had previously.
void turnLeft() {
//Moves both wheels in opposite directions for an empirically determined amount of time
leftWheel.write(BACKWARD_LEFT);
rightWheel.write(FORWARD_RIGHT);
delay(LEFT_TIME);
}
void loop() {
if (SENSOR_LEFT_READING < 400 && SENSOR_RIGHT_READING < 400) {
leftWheel.write(FORWARD_LEFT);
rightWheel.write(FORWARD_RIGHT);
delay(400); //For the robot to center itself on the intersection
turnLeft();
} else {
...
Once we got the turns working well, all that remained was to string together the correct sequence of turns for the robot to perform a figure eight. This was done quite simply by creating a function which utilised a counter to chain together the turns required.
void fig8() {
if (counter%8 == 0) turnLeft();
if (counter%8 == 1) turnLeft();
if (counter%8 == 2) turnLeft();
if (counter%8 == 3) turnLeft();
if (counter%8 == 4) turnRight();
if (counter%8 == 5) turnRight();
if (counter%8 == 6) turnRight();
if (counter%8 == 7) turnRight();
}
void loop() {
if (SENSOR_LEFT_READING < 400 && SENSOR_RIGHT_READING < 400) {
leftWheel.write(FORWARD_LEFT);
rightWheel.write(FORWARD_RIGHT);
delay(400); //For the robot to center itself on the intersection
fig8();
counter++;
} else {
...
Demo:
Our algorithm works fine right now, but we have several ideas for how to improve the robot’s ability to traverse the grid.
First, we need to improve the speed of the robot. As we continue to add components to the robot, the robot will require more power. Up to this point we have been powering all the components directly from the Arduino, but we are likely reaching the limit to the amount of current the board can supply. Our plan for the next lab is to begin to design a power distribution system so that the board and all the components can be powered directly from the battery pack.
Second, we need to improve the robustness of our turning algorithm. As of now, our turning is hard coded: when the robot is signaled to turn, it turns for a predetermined amount of time before following the line again. This presents some problems moving forward. We implemented the same turning sequence we used in Lab 1, but the robot was no longer turning as much. We believe this is because, with the additional hardware added in Milestone 1, the Arduino could no longer supply as much current to the servos, causing them to run slower. This part of the turning can be fixed with a power distribution system as described above. But since the turns may not be entirely consistent when based only on time, we may consider using the line sensors to tell the robot when to stop turning.
Third, we may explore the option of altering the structural design of the robot. Possible alterations include different chassis designs and different wheel sizes. These changes could allow us to place our sensors in a more ideal position and allow us to drive faster.
Fourth, we may spend some time testing various servos in the lab until we find 2 that are almost exactly the same speed. This will help the robot maintain a straight trajectory from the beginning so it won't have to correct itself as much. If finding better servos in the lab proves too difficult, we may entertain the option of buying our own.
Fifth, we may consider implementing a moving average function for our sensors. During the lab, we observed that the sensors occasionally spiked up or down in value; if we told the robot to stop on a white line, it would occasionally jitter around. We didn’t observe this problem in the opposite scenario (driving straight and jittering to a stop), but this may be a problem that arises later. For this reason, this change is not the highest priority at the moment.
For Milestone 2, we had to use the IR distance sensors to add wall-detection capability to the robot and successfully circle inside a set of walls using right-hand following. While in the maze, the robot will have to avoid other robots, so we had to make sure our robot was capable of detecting another robot and determining what it needed to do to avoid it. Finally, we had to prepare a demonstration that shows what the robot is "thinking" while navigating the maze and detecting walls or other robots, using various LEDs.
For Milestone 2, we had to add another level on top of our current robot so that we could add the circuits from Lab 2 as well as an IR sensor on each side and the front of the robot. The IR sensors were able to detect the presence and distance of a wall in front of or to the side of the robot. These IR wall sensors enabled us to use the right-hand wall following method to navigate the maze.
In order to add another level onto the robot, we first had to disassemble a significant portion of our first level to rearrange components for the second level and to replace some of our flimsy structures and connections with more solid ones. Once this was done, we attached another base plate on top of our original one using 3D-printed beams. The final design has the battery pack strapped to the bottom of the first-level base plate, the Arduino mounted on top of the first-level base plate, and two small breadboards on the second-level base plate to handle all the connections.
The IR wall sensors provided to us consisted of an IR LED and an angle-sensitive detector that uses the angle of reflection of the light off an object to determine its distance from the sensor. These wall sensors output analog values, so they were quite simple to set up and read on the Arduino. The code for reading the wall sensors involves doing an analog read and checking the value against a threshold:
if (analogRead(FRONT_IR_SENSOR) > WALL_THRESH) {
...
}
We then had to decide whether to use the short-range or long-range IR sensors for the purposes of wall following. Using some trial and error, we determined that at the distances we would typically be from a wall, the short-range IR sensor reading was much more accurate and stable than the long-range one, so we decided to use only the short-range IR sensors for this milestone.
Once we determined an appropriate threshold value for the wall sensor to successfully detect a wall, we could begin working on our Wall Detection algorithm.
For this milestone we decided to use a simple robot avoidance strategy: if our robot detects another robot, it simply stops and waits for the other robot to pass by. In order to maintain the accuracy and range of the IR hat detection circuit, we decided that it would be best to continue using software FFT instead of trying to detect the frequency purely in hardware. However, we realized that the FFT library would likely interfere with the servo library and would slow down our robot's navigation due to the extra processing time required for the FFT. Therefore, we decided to offload the FFT calculation onto a separate ATMega328P chip. We programmed this chip using an Arduino as an ISP and calibrated the FFT code to the ATMega328P's internal clock using the MATLAB code from Lab 2. We then connected a wire between the ATMega chip and our main Arduino so that the ATMega could let our main board know if it had detected another robot. We used the same code as in Lab 2, except that we changed the prescaler and the bin number based on the measurements we made on the ATMega.
#ifdef ATMEG
int irBinNum = 81;
int irThresh = 60;
#else
int irBinNum = 84;
int irThresh = 60;
#endif
void setup() {
#ifdef ATMEG
ADCSRA = 0xe5; // set the adc to free running mode
#else
ADCSRA = 0xe6; // set the adc to free running mode
#endif
...
}
void loop() {
...
// turn on LED and potentially send data back to main board
if (fft_log_out[irBinNum] > irThresh) {
digitalWrite(7, HIGH);
} else {
digitalWrite(7, LOW);
}
}
We started with our line-following code from Milestone 1 and added the wall detection and IR hat detection code to it in order to perform a right-hand maze traversal. The wall detection code involved doing an analog read and checking the value against a threshold when the robot got to an intersection. If there was no wall to the right of the robot, it would turn right. Otherwise, if there was a wall in front of the robot and to the right, the robot would turn left.
if (analogRead(RIGHT_IR_SENSOR) < WALL_THRESH) {
// no wall to the right
turnRight();
} else if (analogRead(FRONT_IR_SENSOR) > WALL_THRESH) {
// wall to the right and wall in front
turnLeft();
}
Since we offloaded the IR detection code to the ATMega chip, only a digital read of the pin connected to the ATMega was required on each pass through our main loop, and the robot stopped when that pin read high:
if(digitalRead(FFT_PIN) == HIGH) {
leftWheel.write(STOP_POS);
rightWheel.write(STOP_POS);
}
One issue that we noticed when making the integrations was that whenever we wanted to stop our servos, they would jitter around. We figured out that it had something to do with the ISRs running for the line sensors. We ended up detaching the interrupts whenever we stopped and then reattaching them when we started moving again, and that seemed to calm the servos down. We are still unsure as to why the ISRs were affecting the servos.
void detachInterrupts() {
detachInterrupt(digitalPinToInterrupt(SENSOR_LEFT_PIN));
detachInterrupt(digitalPinToInterrupt(SENSOR_RIGHT_PIN));
}
void attachInterrupts() {
// Tell the compiler which pin to associate with which ISR
attachInterrupt(digitalPinToInterrupt(SENSOR_LEFT_PIN), SENSOR_LEFT_ISR, LOW);
attachInterrupt(digitalPinToInterrupt(SENSOR_RIGHT_PIN), SENSOR_RIGHT_ISR, LOW);
// Setup the sensors
setup_sensor(SENSOR_LEFT_PIN, &SENSOR_LEFT_TIMER);
setup_sensor(SENSOR_RIGHT_PIN, &SENSOR_RIGHT_TIMER);
}
...
void loop() {
...
leftWheel.write(STOP_POS);
rightWheel.write(STOP_POS);
detachInterrupts();
delay(1000);
attachInterrupts();
...
}
After all the integrations, our system is laid out like this:
Here is an image of the final circuit for milestone 2:
One of the improvements we discussed for our final robot was to include multiple multi-directional IR sensors instead of the single one we currently have. This would allow us to not only detect another robot but also determine which direction it is coming from, thus allowing our robot to avoid other robots even if their own robot-detection functions are not working as they should.
On a similar note, we considered augmenting the short-range IR sensor currently on the front of our robot with another long-range IR sensor. This would enable us to detect walls not only right in front of our robot but also those a few squares away, thus allowing us to implement a less reactive and more predictive maze exploration algorithm for our final robot.
The maze exploration algorithm that we used was DFS. However, the DFS algorithm was not explicitly coded into our robot. Instead, at each intersection the robot determines what its next action should be: go forward, turn left, or turn right. In order to determine this, the robot performs a BFS search on the currently explored parts of the maze and finds the closest unexplored frontier node. It then performs one of the three actions above in order to get closer to that node. Since each of the edges in the BFS has a weight of 1, this is equivalent to performing Dijkstra’s algorithm to find the closest unexplored node. By iteratively performing this “find the closest unexplored node and move towards it” step, our robot performs a DFS-like mapping of the maze.
The implementation of this algorithm requires two data structures. The first is an RxC array representing each of the intersections in the maze, which tells us where the walls are and whether we have explored the intersection; we already had this data structure from Lab 3. The additional data structure that we need to create is a BFS queue. At this point we make a distinction between explored and visited nodes. An explored node is a node that our robot has already physically been to. A visited node is a node that the BFS algorithm has visited in its search for the closest unexplored node. A node is only put on the BFS queue when it is first visited, and an already-visited node is never enqueued again, which means a node will be on the BFS queue at most once. We utilized this fact to build an implicit BFS queue in another RxC array. This RxC array is called the BFS matrix, and each cell holds a pointer to the next row and column to explore in the BFS. It also holds the visited flag, the direction the robot would be facing entering that node, and the initial action that we took to get to that node.
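As a rough illustration (not our exact declaration), a cell of the BFS matrix can be pictured as a small bit-packed struct built from the field names used in the code below (v, nRC, dir, iD); the NUM_ROWS and NUM_COLS values here are placeholders for the maze dimensions:
#define NUM_ROWS 9  // maze dimensions; placeholder values for illustration
#define NUM_COLS 9

struct BFSCell {
  unsigned char v   : 1;  // visited by the current BFS pass
  unsigned char dir : 2;  // direction the robot would be facing when entering this cell
  unsigned char iD  : 2;  // initial action taken at the robot's current cell to head towards this one
  unsigned char nRC;      // next (row, col) in the implicit queue, encoded as row*NUM_ROWS + col;
                          // the top bit is reused as the "delay" flag for turns
};

// One cell per intersection; together with the wall/explored array from Lab 3,
// this is all the state the exploration algorithm needs.
BFSCell maze_exp_info[NUM_ROWS][NUM_COLS];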
Here is a description of the code we wrote.
This code sets up the initial BFS matrix state:
int curX = robotX;
int curY = robotY;
unsigned char delayValue = 0;
// reset visited
for (int i = 0; i < NUM_ROWS; i++) {
for (int j = 0; j < NUM_COLS; j++) {
maze_exp_info[i][j].v = 0;
maze_exp_info[i][j].nRC = 0xf;
maze_exp_info[i][j].dir = robotDir;
maze_exp_info[i][j].iD = 0x3;
}
}
We pop a node by setting the current x and y positions to the next pointer in the current BFS buffer cell:
unsigned char nRC = maze_exp_info[curY][curX].nRC;
delayValue = bitRead(nRC, 7);
nRC = nRC & (~0x80);
// give up if we have iterated more times than the queue could possibly hold (no unexplored cell is reachable)
if (numIters > 4*NUM_ROWS*NUM_COLS) {
return 0x3;
}
curY = nRC / NUM_ROWS;
curX = nRC % NUM_ROWS;
We insert a delay value for turning left and turning right to account for the fact that going to the left or right cell takes 2 steps:
if (delayValue == 1) {
// remove node and reinsert at end of queue
maze_exp_info[prevY][prevX].nRC = curY*NUM_ROWS + curX;
prevY = curY;
prevX = curX;
// get the next element
unsigned char nRC = maze_exp_info[curY][curX].nRC;
delayValue = bitRead(nRC, 7);
nRC = nRC & (~0x80);
curY = nRC / NUM_ROWS;
curX = nRC % NUM_ROWS;
continue;
}
If we have found an unexplored cell, this is where we want to go:
if (!explored(curX, curY)) {
return maze_exp_info[curY][curX].iD;
}
We then add the forward, left, and right cell to the implicit BFS queue:
int curDir = maze_exp_info[curY][curX].dir;
// check if can go forward
if (canAdvance(curX, curY, curDir)) {
queueAdd(curX, curY, &prevX, &prevY, curDir, 0x0);
}
int leftDir = turnDirnLeft(curDir);
// check if can go left
if (canAdvance(curX, curY, leftDir)) {
queueAdd(curX, curY, &prevX, &prevY, leftDir, 0x1);
}
int rightDir = turnDirnRight(curDir);
// check if can go right
if (canAdvance(curX, curY, rightDir)) {
queueAdd(curX, curY, &prevX, &prevY, rightDir, 0x2);
}
We use this helper function to determine if a node can be added to the queue:
void queueAdd(int x, int y, int* fromX, int* fromY, int advDirn, int moved) {
int advX = x;
int advY = y;
adv(&advX, &advY, advDirn);
// visit if not visited
if (maze_exp_info[advY][advX].v == 0x0) {
// setup the initial direction for next node to
// the same as however we got here
if (maze_exp_info[y][x].iD == 0x3) {
maze_exp_info[advY][advX].iD = moved;
} else {
maze_exp_info[advY][advX].iD = maze_exp_info[y][x].iD;
}
// set incoming dirn to advance dirn
maze_exp_info[advY][advX].dir = advDirn;
// mark the node we will visit as visited
maze_exp_info[advY][advX].v = 0x1;
// set the previous element on the queue to point
// to this enqueued node
maze_exp_info[*fromY][*fromX].nRC = advY*NUM_ROWS + advX;
// set top bit to 1 if you want to delay it by 1
if (moved != 0x0) {
maze_exp_info[*fromY][*fromX].nRC |= 0x80;
}
// set next round of pointer chasing
*fromY = advY;
*fromX = advX;
}
}
After we had tested the maze exploration code on several simulated mazes, we had to integrate it with the code from Lab 3, which did maze exploration using the right hand rule. We refactored our code to call a function that figures out what the next action should be (forward, right, left) at each intersection. We then inserted our DFS algorithm into the function. We then added a state to our state machine to determine the next position and transition to the appropriate state:
leftWheel.write(STOP_POS);
rightWheel.write(STOP_POS);
update_wall_sensor_values();
// get 3 point wall information
int isFrontWall = is_wall_in_front();
int isRightWall = is_wall_on_right();
int isLeftWall = is_wall_on_left();
// set the maze for the current position
set_maze(isFrontWall, isLeftWall, isRightWall);
// send the current maze cell data back to the base station
sendData();
// get the next action to do and advance the current maze state
// by that action
int nextAction = get_next_action();
if (nextAction == 0x0) {
state = FORWARD;
} else if (nextAction == 0x1) {
state = LEFT_1;
} else {
state = RIGHT_1;
}
In the demo we set up a 5x4 maze that had several challenging features for our algorithm. The first was a dead end that the robot had to backtrack out of. The second was that we had a section of the maze that the robot would miss since it always prefers to go forwards. This means it has to figure out a path to get back to the squares that it didn’t explore the first time. Our robot was able to solve both challenges as you can see from the demo below:
Here is the final image from the GUI, and you can see that it perfectly maps out the entire maze:
For this milestone, we had to complete shape and color detection and communicate the data back to the Arduino. The general overview of our FPGA code is to first detect edges, then record the color at each edge; if we see enough edges of a certain color and in a certain direction, we are able to determine a shape and color.
Our shape detection starts with edge detection. In order to perform edge detection, we first convert the pixel values to grayscale by summing the red, green, and blue components into a pixel intensity value:
grey = input_2[7:3] + {input_2[2:0], input_1[7:6]} + input_1[4:0];
Then, we treat the pixel values as a 1-D signal and store the previous 4 grayscale values:
grey_pppp = grey_ppp;
grey_ppp = grey_pp;
grey_pp = grey_p;
grey_p = grey;
In order to remove noise, we set the current grayscale value to be the average of the current and previous grayscale values:
grey_blur = grey + grey_p;
grey_blur = grey_blur >> 1;
grey = grey_blur[7:0];
Note that this is equivalent to convolving the grayscale signal with the kernel [1/2 1/2]. Next, we perform edge detection by convolving the pixel signal with the edge-detection kernel [-1 -1 0 1 1], which allows us to detect non-horizontal right edges.
tmp_1 = grey - grey_pppp;
tmp_2 = grey_p - grey_ppp;
grey_eg = tmp_1 + tmp_2;
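To make the arithmetic concrete, here is a small standalone C++ snippet (purely illustrative, not part of our FPGA code) that applies the same [-1 -1 0 1 1] filter to a made-up row of grayscale values; flat regions give a response near zero, while the dark-to-bright transition gives a large positive response:
#include <cstdio>

int main() {
  // Synthetic row of grayscale values with a dark-to-bright edge in the middle.
  int grey[] = {10, 10, 10, 10, 10, 12, 200, 200, 200, 200, 200};
  int n = sizeof(grey) / sizeof(grey[0]);

  // Same computation as above: (grey - grey_pppp) + (grey_p - grey_ppp).
  for (int i = 4; i < n; i++) {
    int edge = (grey[i] - grey[i - 4]) + (grey[i - 1] - grey[i - 3]);
    printf("x=%2d  edge=%4d\n", i, edge);
  }
  return 0;
}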
Finally, every 12 lines we determine the x coordinate of the brightest edge pixel and compare it with the x coordinate of the brightest pixel from 12 lines above it. If the difference is positive, we record a negative edge. If the difference is negative, we record a positive edge. If the difference is around 0, we record a straight edge.
if (Y_ADDR % 12 == 0 && X_ADDR >= 30 && X_ADDR < 150) begin
if (grey_eg[7:3] > curBrightestVal) begin
curBrightestVal = grey_eg[7:3];
curXB = X_ADDR;
end
end
if (Y_ADDR % 12 == 0 && X_ADDR == 150 ) begin
if (curBrightestVal >= 1) begin
xDiff = curXB - prevXB;
if (xDiff > 3 && xDiff < 16) begin
numNeg = numNeg + 1;
end
else if (xDiff < -3 && xDiff > -16) begin
numPos = numPos + 1;
end
else if (xDiff <= 3 && xDiff >= -3) begin
numStraight = numStraight + 1;
end
end
prevXB = curXB;
curBrightestVal = 0;
end
Later, we use the number of straight, positive, and negative edges to determine the shape.
In order to determine what color a shape is, we look at the color to the left of the brightest edge that we detect. This makes our detection more robust, since we avoid the case where we see many straight edges but the color is white, such as at a break in the maze wall. The code first computes the per-channel differences between the current pixel and hard-coded reference values for red and blue:
redRDif = 5'b11111 - pixel_data_RGB565[15:11];
redGDif = 5'b01000 - pixel_data_RGB565[10:6];
redBDif = 5'b01000 - pixel_data_RGB565[4:0];
blueRDif = 5'b00001 - pixel_data_RGB565[15:11];
blueGDif = 5'b00001 - pixel_data_RGB565[10:6];
blueBDif = 5'b00101 - pixel_data_RGB565[4:0];
Next, once it comes across the pixel directly below and 10 to the left of the brightest edge pixel, it checks to see if the current pixel is close enough to red or blue to be considered red or blue:
if (Y_ADDR % 12 == 1 && X_ADDR == prevXB - 20) begin
if (redRDif < `R_THRESH && redRDif > -`R_THRESH &&
redGDif < `R_THRESH && redGDif > -`R_THRESH &&
redBDif < `R_THRESH && redBDif > -`R_THRESH ) begin
pixel_data_RGB565 = 16'b1111100000000000;
numRed = numRed + 1;
end
else if (blueRDif < `B_THRESH && blueRDif > -`B_THRESH &&
blueGDif < `B_THRESH && blueGDif > -`B_THRESH &&
blueBDif < `B_THRESH && blueBDif > -`B_THRESH ) begin
pixel_data_RGB565 = 16'b0000000000011111;
numBlue = numBlue + 1;
end
end
It then sets 3 wires to the appropriate values to encode the 7 possible options according to the following scheme, which is modified from the one envisioned in lab 4.
// x00: Nothing
// 001: Blue triangle
// 010: Blue square
// 011: Blue diamond
// 101: Red triangle
// 110: Red square
// 111: Red diamond
if (numRed >= 6) begin
// red treasure: set the high bit (encoding 1xx)
RES_2 = 1;
LED_2 = 1;
end
else if (numBlue >= 6) begin
// blue treasure: clear the high bit (encoding 0xx)
RES_2 = 0;
LED_2 = 0;
end
if ( numNeg >= 6 ) begin
// mostly negative edges -> triangle (low bits 01)
RES_1 = 0;
RES_0 = 1;
LED_1 = 0;
LED_0 = 1;
end
else if ( numStraight >= 6 ) begin
// mostly straight edges -> square (low bits 10)
RES_1 = 1;
RES_0 = 0;
LED_1 = 1;
LED_0 = 0;
end
else if ( numNeg >= 3 && numPos >= 3 ) begin
// a mix of positive and negative edges -> diamond (low bits 11)
RES_1 = 1;
RES_0 = 1;
LED_1 = 1;
LED_0 = 1;
end
else begin
// not enough edges -> nothing detected (low bits 00)
RES_1 = 0;
RES_0 = 0;
LED_1 = 0;
LED_0 = 0;
end
The arduino periodically polls these signals and prints out the value to the serial monitor:
int w0 = digitalRead(4);
int w1 = digitalRead(5);
int w2 = digitalRead(6);
if (!w2 && !w1 && w0) {
Serial.println("Blue triangle");
} else if (!w2 && w1 && !w0) {
Serial.println("Blue square");
} else if (!w2 && w1 && w0) {
Serial.println("Blue diamond");
} else if (w2 && !w1 && w0) {
Serial.println("Red triangle");
} else if (w2 && w1 && !w0) {
Serial.println("Red square");
} else if (w2 && w1 && w0) {
Serial.println("Red diamond");
} else if (!w1 && !w0) {
Serial.println("Nothing");
}
Here is a demo of the full code working as expected:
The final mechanical design of our robot was similar to that of previous iterations. The biggest difference was that we got rid of the second base plate that was on top, because we transitioned from breadboards to stacking perf boards. We also had to move the battery: after we added an additional line sensor between the existing ones, it became difficult to access the battery underneath. To fix this, we added velcro to the base of the robot and to the stands that used to hold the second base so that the battery could be attached there and accessed easily.
Another major change was that we increased the height of the line sensors and added a third line sensor between the original two. We initially increased the height because we realized that the line sensors were too close to the ground and would sometimes get caught on the mat. However, we later realized that raising the line sensors also gave better readings to the Arduino and made our line sensing much more reliable. The third line sensor was added to further increase the accuracy of our line detection.
The rest of the robot consists of the servos, the wall sensors on its perimeter, and our functioning IR hat. There are three wall sensors, two on the sides and one on the front of the robot, attached to the base plate using mounts. Our IR hat had to be placed around 5.5 inches off the ground; to meet this requirement, we bolted it on top of a stand that used to support the second base plate.
Our robot uses a second ATmega chip (other than the one on the Arduino) to perform all of our FFT calculations and camera setup. We chose to do this so that our Arduino did not have to waste time doing the FFT calculations for sound and IR detection and could instead worry about maze exploration and primary sensor control. This design also allowed us to minimize the number of pins we used on our Arduino, especially as we no longer needed two analog pins for camera setup. This added the extra complexity of needing to remove the ATmega chip and reprogram it if we wanted to change any of the FFT code. Luckily, once we chose correct thresholds this no longer needed to be done.
We had put off cleaning up the wiring of our robot for a while, but decided it needed to be done before the final competition. We decided on a stacking board design for our wiring as it would allow us to easily assemble and disassemble our robot while still giving us access to all the arduino pins in case we needed to add additional hardware. Originally, we wanted to mill two PCBs: one for our microphone and IR components, and one for our servos and remaining sensors. We developed schematics and layouts for both boards using Eagle.
Here is the layout for our microphone and IR circuit:
And here is the layout for our servos and sensors circuit:
Several important considerations were made when designing these boards. First, with our microphone and IR board, we needed our IR sensor to be positioned at the front of the board (lining up with the front of the robot) and our microphone needed to be exposed so as to get the best sound reception. Second—with our servos and sensors board—we needed the positions of the headers on the board to line up with where their respective sensor or servo would be located on the robot. We decided that this board needed to be on top as to give us easy access to all the header pins; a board on top of this would get in the way. Also, this board needed to be shorter than the bottom board so that the microphone could remain uncovered.
With the top board we decided to add several additional features, including an override button, on-off switch, and power header. The override button was there in case our robot was not able to successfully start from the tone. Since the start signal was being sent from our second ATmega chip to a digital pin on the Arduino, we did not need any additional pins to implement this button. Instead, the button just pulled the signal to the existing digital pin high when pressed. The on-off switch was added to allow us to control power to the robot without having to unplug it. The power header was added to allow us to power the robot directly from the battery pack instead of from the arduino. All the components on the top board were connected to this power header and to the VIN pin of the arduino. The components on the bottom board were left powered by the arduino as there were only a few of them and none of them drew a lot of current.
Unfortunately, we had technical difficulties getting our first board to mill properly. A PCB also would not have given us any flexibility if we decided to move components around or add new ones (flexibility we would end up needing). Instead of PCBs, we decided to use perf boards. We wanted to keep the stacking design of the boards, so one side of the bottom board needed to be bent slightly so that it would line up correctly with the Arduino headers. So as not to waste the layouts produced for the PCBs, we laid out our perf boards in almost the same manner.
Here is an image of our final perf boards circuit:
We also added an additional line sensor to our robot in an effort to make our line following more robust. Instead of using fixed thresholds for white and black, our system calibrates itself based on the reading of the middle line sensor; this sensor is meant to always be on the line while the outside two stay off it. This allowed our robot to function properly in many different lighting conditions, since the lighting on the day of the competition could not be predicted.
The final major hardware modification we made to our robot dealt with the position of our IR sensor. When testing our robot collision avoidance we ran into two major problems: our IR sensor would occasionally detect our own IR hat, and it could not detect other robots until they were too close. To mitigate both of these problems, we decided to mount our IR sensor pointing down, roughly 1.5 to 2 inches above our IR hat. We found through testing that most of the IR light emitted from the hats went upwards, no matter the placement of the hat. We believe this is because the LEDs are surface mounted to the top of the hat, so the light they emit is mostly blocked from going downward by the hat itself. If the hat was mounted upside down—as some teams did—then the light would go downwards, reflect off the ground, and then go back upwards. This positioning of our IR sensor gave us a consistent 5-7 inches of clearance before detecting another robot, and the range could be tuned by changing the angle at which we bent the IR sensor. It also coincidentally solved the problem of our robot detecting its own IR hat. The robot used to detect its own hat near walls, as the IR light would reflect off the walls and back into our own sensor. With the sensor repositioned above the IR hat, it was now higher than the walls of the maze, so light could not be reflected back into it. We also tested to make sure we could not detect a robot on the opposite side of a wall from us. Our robot detection was working correctly.
Here is an image of our final robot before the competition:
We had several pieces of code split over a few microcontrollers: FPGA code that did treasure detection, base station code that received messages and updated the GUI, robot code that ran the maze exploration and primary sensor code, and ATmega code that ran the FFT and set up the camera.
Although the basic ideas behind our color and shape detection didn’t change much from what we had in our FPGA code in milestone 4, we spent a lot of time post milestone 4 adjusting the threshold values for the color and shape detection to get the most consistent results possible.
Before that, however, we would like to mention some major changes we made to the provided FPGA code for Milestone 4 but neglected to touch upon in that writeup. One of these changes was to increase the width of the pixel data stored in the FPGA memory from 8 bits per pixel all the way to 32 bits per pixel. This allowed us to store both the 16-bit RGB565 color data and the 16-bit greyscale data for each pixel, which in turn allowed us to increase the accuracy of both our color and shape detection. At first we were afraid that the FPGA board would not have enough memory to store this much data, but after testing we discovered that we were able to store the 32-bit pixel data just fine. We did realize, however, that the VGA driver we were using could only output pixel data to the monitor in the RGB332 format, so we moved the downsampling code we had written in Lab 4 to the VGA_DRIVER file and downsampled both the color and greyscale pixel data before choosing one of them to output to the monitor.
assign PIXEL_COLOR_EDGE = {PIXEL_COLOR_IN[15:13], PIXEL_COLOR_IN[10:8], PIXEL_COLOR_IN[4:3]};
assign PIXEL_COLOR_DSAMP = {PIXEL_COLOR_IN[31:29], PIXEL_COLOR_IN[26:24], PIXEL_COLOR_IN[20:19]};
assign PIXEL_COLOR_OUT = (pixel_count < (`VISIBLE_SCREEN_WIDTH)) ? (PIXEL_COLOR_DSAMP) : (8'b00000000);
Another major change we made in Milestone 4 was to move all the functionality of the IMAGE_PROCESSOR module into the main DE0_NANO file. Although the color and shape detection was working fine in the IMAGE_PROCESSOR module, we were having a lot of difficulty transferring the correct result into the DE0_NANO file to be sent back to the Arduino. Another problem we faced while using the IMAGE_PROCESSOR file was that the design would sometimes fail to compile due to insufficient memory on the FPGA. Moving all of the image processing code from the IMAGE_PROCESSOR file to the main DE0_NANO file fixed both of these issues.
// get camera input
input_1 = CAMERA_INPUT;
...
input_2 = CAMERA_INPUT;
pixel_data_RGB565 = {input_2, input_1};
redRDif = 5'b11111 - pixel_data_RGB565[15:11];
redGDif = 5'b01000 - pixel_data_RGB565[10:6];
redBDif = 5'b01000 - pixel_data_RGB565[4:0];
blueRDif = 5'b00001 - pixel_data_RGB565[15:11];
blueGDif = 5'b00001 - pixel_data_RGB565[10:6];
blueBDif = 5'b00101 - pixel_data_RGB565[4:0];
grey = input_2[7:3] + {input_2[2:0], input_1[7:6]} + input_1[4:0];
// edge detection
if (Y_ADDR % 12 == 0 && X_ADDR >= 30 && X_ADDR < 150) begin
if (grey_eg[7:3] > curBrightestVal) begin
curBrightestVal = grey_eg[7:3];
pixel_data_RGB565 = 16'b1111111111111111;
curXB = X_ADDR;
end
end
// color detection
if (Y_ADDR % 12 == 1 && X_ADDR == prevXB - 20) begin
if (redRDif < `C_THRESH && redRDif > -`C_THRESH &&
redGDif < `C_THRESH && redGDif > -`C_THRESH &&
redBDif < `C_THRESH && redBDif > -`C_THRESH )
numRed = numRed + 1;
else if (blueRDif < `B_THRESH && blueRDif > -`B_THRESH &&
blueGDif < `B_THRESH && blueGDif > -`B_THRESH &&
blueBDif < `B_THRESH && blueBDif > -`B_THRESH )
numBlue = numBlue + 1;
end
// detecting as red triangle
if (VSYNC == 1'b1 && v_flag == 0) begin
RES_1 = 0;
RES_0 = 0;
if (numRed >= 6) begin
RES_2 = 1;
if ( numNeg >= 6 ) begin
RES_1 = 1;
RES_0 = 0;
end
...
end
...
end
After the changes mentioned above, and once we were happy with all of our threshold values, we attached the camera to the robot and tested whether we could successfully detect the treasures in an actual maze. Unfortunately, our testing showed that although the camera was able to detect the treasures in most cases, it was also giving us a lot of false positives. Given the strict point penalties for reporting false treasures or getting one or more aspects of a treasure wrong, doing treasure detection in the competition was not worth the risk, so we removed the camera and FPGA from our final robot.
The first section of the ATmega code was for setting up the camera. The camera setup code was adjusted from Milestone 4 by writing to more registers to improve the image from the camera and by changing the pins used to communicate with the FPGA: the Arduino now uses three analog pins to read values from the FPGA rather than digital pins. For the registers, we first enabled the fast AEC/AGC algorithm and disabled the auto white balance by writing 0x88 to address 0x13 (COM8). This let us set the gain for blue, red, and green ourselves by writing 0x80, 0x40, and 0x4E to addresses 0x01, 0x02, and 0x6A respectively. These values were chosen by experiment and will need to change based on the environment you are in. The exposure time for automatic exposure control was set by writing 0x40 to address 0x07; this value was also chosen by experiment and depends on the environment, giving the camera a window to adjust properly. However, register 0x07 only holds the upper 6 bits of the AEC value; the middle 8 bits come from register 0x10, which therefore also affects the exposure time. In our case, 0x24 was written to 0x10.
The gain setting was adjusted by writing to addresses 0x00 and 0x03. This was necessary because we disabled the AGC when writing to COM8. GAIN (0x00) holds the least significant bits of the gain and VREF (0x03) holds the most significant bits; these registers were set to 0x0F and 0x00 respectively. Along with the gain, we could adjust the denoising to reduce noise and enhance our image quality: register 0x4C sets the denoise threshold, which we set to 0xFF.
Finally, the brightness and contrast of the camera could be adjusted with registers 0x55 and 0x56 respectively. We set them to 0x60 and 0x80, but these values also depend on your surroundings. All of the registers mentioned above were set to values near their defaults: we first read the values of all of the registers and then tweaked them until we got an image that worked well. We used this document to get a better idea of what each register did.
// reset registers
OV7670_write_register(COM7, 0x80);
set_color_matrix();
// QCIF (176x144) RGB output format
OV7670_write_register(COM7, 0x0C);
// enable scaling for QCIF
OV7670_write_register(COM3, 0x08);
// Use external clock
OV7670_write_register(CLKRC, 0xC0);
// set to RGB 565 mode
OV7670_write_register(COM15, 0xD0);
// adjust the gain ceiling
OV7670_write_register(COM9, 0x6A);
// mirror the image
OV7670_write_register(MVFP, 0x30);
// freeze the automatic settings
OV7670_write_register(COM8, 0x88);
// set white balance blue
OV7670_write_register(WB_B, 0x80);
// set white balance red
OV7670_write_register(WB_R, 0x40);
// set white balance green
OV7670_write_register(WB_G, 0x4E);
// set exposure upper bits
OV7670_write_register(AEC, 0x40);
// set exposure lower bits
OV7670_write_register(AEC_M, 0x24);
// set gain lower bits
OV7670_write_register(G_LSB, 0x0F);
// set gain upper bits
OV7670_write_register(G_MSB, 0x00);
// set denoise threshold
OV7670_write_register(0x4C, 0xFF);
// set brightness
OV7670_write_register(0x55, 0x60);
// set contrast
OV7670_write_register(0x56, 0x80);
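For completeness, a register write helper such as OV7670_write_register can be implemented over the camera's SCCB (I2C-compatible) interface with the Arduino Wire library roughly as sketched below; this is a generic illustration rather than a copy of our exact helper, and 0x21 is the OV7670's 7-bit device address:
#include <Wire.h>

#define OV7670_I2C_ADDR 0x21  // 7-bit I2C/SCCB address of the OV7670

// Write a single byte to one of the camera's configuration registers.
void OV7670_write_register(uint8_t reg, uint8_t value) {
  Wire.beginTransmission(OV7670_I2C_ADDR);
  Wire.write(reg);    // register address
  Wire.write(value);  // new register value
  Wire.endTransmission();
}

void setup() {
  Wire.begin();  // join the I2C bus as master before any register writes
  // ... camera register writes like the ones above go here ...
}

void loop() {}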
The second part of the ATmega code was the FFT code. This code remained mostly unchanged from Milestone 3, although we did adjust the threshold values slightly. We realized that our IR detection was too sensitive and was picking up our own IR hat or reflections of it off the walls. We were also able to mitigate this problem by changing the positioning of the IR sensor, as described above.
The final code that we ran on the Arduino was split into four main parts to keep the codebase clean. The first part was the sensor code, which contained the code we used to read the line sensors and distance sensors. We then applied averaging and other processing to make the data more usable. For the distance sensors, we wrote a function that takes a few readings of each sensor and averages them, which helped smooth out noise from the sensors:
void update_wall_sensor(unsigned long* reading, int sensor_num) {
*reading = 0;
for(int i = 0; i < NUM_WALL_READINGS; i++) {
*reading += analogRead(sensor_num);
}
*reading = *reading / NUM_WALL_READINGS;
}
The line sensing code was more complicated. We first calculate the value that each of the digital line sensors is reading. Then, if the reading is valid, we update a ring buffer of previous values, which enables efficient averaging of the data. We then did something interesting: we used the center line sensor reading to calibrate the other two line sensors. Because the center line sensor is almost always over the center line, and because all of the line sensors are approximately the same, we could calibrate our thresholds for what counts as a line based on the center line sensor. We needed to be careful, though: there are times when the middle line sensor is not on a white line, mostly while the robot is turning left or right. So we added support for disabling updates to the center line reading in those circumstances. We also set a maximum value that the center reading may take, so that even if the sensor did go off the center line when not turning, it wouldn’t corrupt the thresholds.
leftSenseBuf[leftSenseHead] = leftTime;
leftSenseHead = (leftSenseHead + 1) % SENSOR_AVE_SIZE;
if (updateCenterReading && centerTime < MAX_CENTER) {
centerSenseBuf[centerSenseHead] = centerTime;
centerSenseHead = (centerSenseHead + 1) % SENSOR_AVE_SIZE;
}
int sum = 0;
for(int i = 0; i < SENSOR_AVE_SIZE; i++) {
sum += leftSenseBuf[i];
}
SENSOR_LEFT_READING = sum / SENSOR_AVE_SIZE;
sum = 0;
for(int i = 0; i < SENSOR_AVE_SIZE; i++) {
sum += centerSenseBuf[i];
}
SENSOR_CENTER_READING = sum / SENSOR_AVE_SIZE;
// Update the threshold value
LEFT_SENSOR_THRESH = SENSOR_CENTER_READING+LEFT_DIFF;
The next piece of code was our state machine. We added a few states and variables for error detection and recovery compared to Milestone 3. We realized that we could time how long it takes to traverse two intersections and set a ceiling on the time it should take to traverse one intersection at approximately two-thirds of that value. Then, we can use the Arduino’s internal timer to periodically check how long it has been since we were last at an intersection. If that time goes above the threshold, then our line detection must have missed something, and we stop and update our state just as we would if we had detected an intersection.
if (state == FORWARD) {
if (millis() - state_start_time > FORWARD_TIME_THRESH) {
// Must have missed intersection. Get next action
state = GET_NEXT_ACTION;
} else {
// perform normal line following code
...
}
...
}
During tests, we observed that the robot sometimes detected intersections where there were none. In order to make our robot more robust to small changes in tape color, we added a new state that the robot goes into when it thinks it has detected an intersection. First it waits for a few cycles until it gets totally fresh sensor readings. Then it reevaluates whether it is still at an intersection; if it is, it stops and figures out what to do next. This robustness code was difficult to test, as the line sensor errors were difficult to reproduce. However, the code did seem to work in a toy environment with only a straight line.
if (state == SAW_INTER) {
// wait for sensors to settle
if (forwardWait < 4) {
forwardWait++;
} else {
if (SENSOR_LEFT_READING < LEFT_SENSOR_THRESH && SENSOR_RIGHT_READING < RIGHT_SENSOR_THRESH) {
// go forward and center over line
leftWheel.write(FORWARD_LEFT);
rightWheel.write(FORWARD_RIGHT);
delay(150);
state = GET_NEXT_ACTION;
} else {
// false intersection
state = FORWARD;
}
}
}
Finally, we needed to add code to handle making U-turns due to detection of another robot by the IR sensor. When we detect an IR hat while going forward we transition into the turn left state. Once we finish the left turn there are 2 possibilities: either we turned 90 degrees or 180 degrees. In order to tell the difference we again measured the average time to turn 135 degrees and coded that number in as a threshold. Then, once we have finished attempting the U-turn, we check how long the turn took and make adjustments to the maze exploration algorithm to fix our position.
if(digitalRead(FFT_DATA_PIN) == HIGH) {
state = LEFT_1;
attempt_u_turn = 1;
}
...
if (attempt_u_turn) {
// reset next value
maze[robotY][robotX] = 0;
// move my position back
robotDir = (robotDir + 2) % 4;
adv(&robotX, &robotY, robotDir);
if (millis() - state_start_time > 1100) {
// made u turn
state = FORWARD;
attempt_u_turn = 0;
} else {
// only turned 90 degrees to the left, finish 180 degree turn
state = LEFT_1;
attempt_u_turn = 0;
}
}
The maze exploration algorithm was mostly unchanged from Milestone 3. The primary innovation in our maze exploration code was the low number of bytes it required. The data structures that contained the maze information and drove the maze traversal algorithm took up 3 bytes per cell, giving 243 bytes for the whole algorithm. The image below shows the compile size of the maze code with some print statements to the serial monitor. The serial code takes up 192 bytes on its own, so the maze code takes up 498 - 192 = 306 bytes, which includes the maze information plus some miscellaneous state.
This efficiency was made possible by the implicit pointers in our maze traversal data structure. You can also see this efficiency in our final robot’s global variable size. Of course it also helped that the FFT code was on the ATmega; however, even without the extra chip, we would still have been able to fit the FFT library on the single Arduino.
The final part of the code base was the radio code. This was also mostly unchanged from milestone 3. We did add support for sending the treasure information but after deciding not to add the camera to the robot we disabled this part of the code.
Our robot ran into two different problems—one during each round—that hindered our performance during the competition.
During the first round, our radios failed to synchronize, so no data was transmitted from the robot to the base station. Unfortunately, this happened to be the round where our robot performed better in the maze. It appears that our robot successfully navigated over half of the maze during this run, which could have potentially put us in the running for the final round. Although we cannot verify how accurately our robot mapped out the maze without the GUI output, we do have a video of how our robot performed during round 1:
During the second round, our robot seemed to be performing very well at the beginning, quickly exploring the maze and reaching the opposite quadrant within 30 or so seconds. However, in the opposite quadrant there was a particular location where our robot detected a false intersection and got lost. This happened twice in the exact same location.
Even though our robot did not perform exactly how we wanted it to on competition day, we believe that the system we developed over the course of the semester had many interesting and unique ideas and we are proud of the work that we did.
Item | Cost ($) | Quantity | Total Cost ($) |
---|---|---|---|
Line sensors | 3.00 | 3 | 9.00 |
IR distance sensors | 7.00 | 3 | 21.00 |
Camera | 14.00 | 1 | 14.00 |
Servos | 13.00 | 2 | 26.00 |
Arduino Uno | 16.00 | 1 | 16.00 |
ATmega328P | 4.30 | 1 | 4.30 |
LM358 | 0.60 | 1 | 0.60 |
Perf board | 2.00 | 2 | 4.00 |
IR hat | 2.00 | 1 | 2.00 |
Total | | | 96.90 |
Is “a ban on offensive autonomous weapons beyond meaningful human control” going to work?
As technology continues to advance, more and more aspects of life and human interaction are becoming automated. From the advent of assembly lines and steam power in the first industrial revolution, to machine learning algorithms that help doctors diagnose patients, less work is being done by humans and more is being left to intelligent autonomous systems. As we continue to progress, these trends will spread to other fields, including warfare. Many believe that the use of such weapons would lower the threshold for going to battle due to the disconnect between the users of the technology and the subsequent carnage and destruction brought about by it. Because of these concerns, prominent figures in artificial intelligence and robotics—including Stephen Hawking and Elon Musk—have signed an open letter asking for a ban on offensive autonomous weapons beyond meaningful human control. A ban on offensive autonomous weapons beyond meaningful human control would not be effective, for practical, ethical, and economic reasons.
Since these weapons already exist and countries are going to great lengths to develop them, it would be extremely difficult to enforce this ban. Countries want to stockpile autonomous weapons, much as they do nuclear weapons, because building them and holding them as a deterrent is a winning strategy. And since these robots can be cheap to build, there is no physical limit stopping their development. As the cost of developing and building powerful autonomous weapons continues to decrease, more and more people will have the ability to build these weapons, and some of them may not have the best intentions. One can imagine small groups with a bit of funding building cheap autonomous drones with lethal explosives attached, like the fictitious Slaughterbot drones (link). The video describes a new autonomous drone with facial recognition technology, equipped with an explosive charge, that can seek out individuals and kill them. The point of the video is that the individual technologies needed to build these killer drones (facial recognition, small explosive charges, quadcopters capable of autonomous navigation) already exist, so it is feasible that a terrorist organization could build them in the near future. Thought experiments like this rightly raise alarm and should encourage world leaders to search for a way to deal with autonomous weapons in the future. However, the Slaughterbots video also shows why a simple ban will likely not be effective in stopping the advancement of autonomous weapons systems.
Ethically, putting a ban on autonomous weapons may put a halt to all autonomous systems, which would not be ideal because autonomous systems can do a lot of good, for example in self-driving cars and elderly care. And even when such systems could be seen as dangerous weapons, some are necessary for their intended purpose: lasers in space could be used as weapons, but they are also needed to clean up orbital debris. Moreover, even if placing the ban were completely ethical, bans of this kind have rarely been effective in the past, as recently shown with North Korea.
Economically, placing a ban on autonomous weapons would shut down a growing market that already has a lot of money poured into it. The market covers everything from partially to fully autonomous weapons, meaning a system does not need to make decisions completely by itself to be considered autonomous (functional morality). And since many of these companies are tied to academic institutions, it would be hard to separate research from actual use.
In conclusion, while we need solutions to handle the increasing use of autonomous weapons, a flat-out ban is not feasible and will likely be ineffective at preventing their development and use. Instead, we need to focus on regulating these systems and conducting research into countermeasures against various autonomous weapons.