precommit
sgbaird committed Oct 9, 2024
1 parent d29755c commit ccd5294
Showing 14 changed files with 243 additions and 161 deletions.
32 changes: 20 additions & 12 deletions docs/courses/hello-world/1.5-data-logging.md
@@ -119,9 +119,9 @@ for _ in range(num_retries):
    txt = str(response.text)
    status_code = response.status_code

-    if status_code != 200:
-      print("Retrying in 5 seconds...")
-      time.sleep(5)
+    if status_code != 200:
+        print("Retrying in 5 seconds...")
+        time.sleep(5)

    print(f"Response: ({status_code}), msg = {txt}")

@@ -151,23 +151,31 @@ import json
import pymongo
import os


def lambda_handler(event, context):
    try:
-        client = pymongo.MongoClient(os.environ['MONGODB_URI'])
+        client = pymongo.MongoClient(os.environ["MONGODB_URI"])
        db = client["your_database_name"]
        collection = db["your_collection_name"]
-        body = json.loads(event['body'])
+
+        body = json.loads(event["body"])
        result = collection.insert_one(body)

        return {
-            'statusCode': 200,
-            'body': json.dumps({'message': 'Document inserted successfully', 'id': str(result.inserted_id)})
+            "statusCode": 200,
+            "body": json.dumps(
+                {
+                    "message": "Document inserted successfully",
+                    "id": str(result.inserted_id),
+                }
+            ),
        }
    except Exception as e:
        return {
-            'statusCode': 500,
-            'body': json.dumps({'message': 'Error inserting document', 'error': str(e)})
+            "statusCode": 500,
+            "body": json.dumps(
+                {"message": "Error inserting document", "error": str(e)}
+            ),
        }
```
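Once the handler is deployed (for example behind a Lambda Function URL or an API Gateway route), it can be exercised from any HTTP client. A usage sketch; the URL and document below are placeholders:

```python
import requests

# Placeholder URL; use your own Function URL or API Gateway endpoint
url = "https://abc123.lambda-url.us-east-1.on.aws/"

doc = {"experiment_id": "exp-001", "temperature": 21.5}
response = requests.post(url, json=doc)

print(response.status_code)  # 200 on success, 500 on a handler error
print(response.json())       # {'message': ..., 'id': ...}
```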

@@ -248,4 +248,4 @@ https://q.utoronto.ca/courses/350933/assignments/1274182?display=full_width
https://q.utoronto.ca/courses/351407/assignments/1286935?display=full_width
:::
::::
2 changes: 1 addition & 1 deletion docs/courses/hello-world/1.5.1-aws-lambda-read.json
@@ -141,4 +141,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
2 changes: 0 additions & 2 deletions docs/courses/hello-world/index.md
@@ -81,5 +81,3 @@ Begin
```

<script async defer src="https://buttons.github.io/buttons.js"></script>


92 changes: 48 additions & 44 deletions docs/courses/robotics/3.4-mobile-robotics.md
@@ -15,50 +15,50 @@ In this module, you will develop software to:

### Bill of Materials

- [MyCobot 280 Pi: World's Smallest and Lightest Six-Axis Collaborative Robot](https://shop.elephantrobotics.com/en-ca/collections/mycobot-280/products/mycobot-pi-worlds-smallest-and-lightest-six-axis-collaborative-robot)
A versatile and compact six-axis robot, ideal for mobile robotics applications.

- [Camera Flange 2.0](https://shop.elephantrobotics.com/en-ca/collections/camera-modules/products/camera-flange-2-0)
Used for vision-based tasks in mobile robotics, such as object recognition and navigation.

- [Adaptive Gripper](https://shop.elephantrobotics.com/en-ca/collections/grippers/products/adaptive-gripper)
A flexible gripper designed for precise manipulation and picking tasks in collaborative robotic systems.

- [G-Shape Base 2.0](https://shop.elephantrobotics.com/en-ca/collections/fixed-bases/products/g-shape-base-2-0)
Provides a sturdy mounting platform for the MyCobot, ensuring stability during robotic operations.

- [LIDAR sensor for obstacle detection]
Used for real-time obstacle detection and mapping in mobile robotics.

- Printed AprilTags (can be generated and printed from [AprilTag Generation](https://github.com/AprilRobotics/apriltag-generation))

- USB-A to micro USB-B cable:
Used to connect and power devices such as the Raspberry Pi or peripherals.

- SD Card with Raspbian OS:
Pre-loaded with the Raspbian OS for use with the Raspberry Pi to facilitate software installations and configurations.

### Required Software

- [Prefect](https://www.prefect.io/)
A workflow orchestration tool used to manage and coordinate asynchronous tasks in the system.

- [AprilTags Python Library](https://pypi.org/project/apriltag/)
A computer vision library for identifying and tracking AprilTags, used for spatial referencing and navigation.

- [ROS2 Humble](https://docs.ros.org/en/humble/Installation.html)
ROS2 (Robot Operating System 2) is an open-source framework for building robot applications. **Humble Hawksbill** is the currently recommended version due to its stability and long-term support.

- [Ubuntu 22.04 LTS](https://releases.ubuntu.com/22.04/)
Ubuntu 22.04 LTS is the recommended Linux distribution for running ROS2, as it provides a stable and compatible environment. **Ubuntu 24.04** (the latest version) may encounter compatibility issues with ROS2 and other tools.


### Documentation

- [MyCobot Pi Documentation](https://docs.elephantrobotics.com/docs/gitbook-en/2-serialproduct/2.1-280/2.1.2-PI.html)
Detailed guide on setting up and operating the MyCobot Pi.

- [Gripper Control via Python](https://docs.elephantrobotics.com/docs/gitbook-en/7-ApplicationBasePython/7.5_gripper.html)
Guide for controlling the adaptive gripper using Python commands; a short sketch follows below.
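As a rough illustration of the kind of call that guide covers, a `pymycobot` sketch; the port and baud rate are assumptions, so check the MyCobot documentation for your model:

```python
import time

from pymycobot.mycobot import MyCobot

# Assumed serial port and baud rate for the 280 Pi; verify against the docs
mc = MyCobot("/dev/ttyAMA0", 1000000)

mc.set_gripper_state(0, 80)  # 0 = open, at speed 80
time.sleep(2)
mc.set_gripper_state(1, 80)  # 1 = close
```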


@@ -71,7 +71,7 @@ First, you will learn how to use ROS to control a mobile cobot. ROS is an open-s

#### ROS2 Overview

ROS2 (Robot Operating System 2) is an extension of the original ROS framework, designed to be modular and scalable for building complex robotic systems. It is composed of numerous **software packages**, which offer a wide range of functionalities like hardware abstraction, device control, message-passing, and software development tools.

ROS2 heavily relies on the **Ubuntu Linux** operating system, and it is recommended to use **Ubuntu 22.04 LTS** for stability and compatibility. One of ROS2’s key advantages is its distributed architecture, which allows different components (nodes) to communicate with each other over a network. This distributed nature is essential for enabling flexibility in robot systems development.
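To make the node concept concrete, here is a minimal `rclpy` publisher node, a sketch that assumes a sourced ROS2 Humble environment; the node and topic names are illustrative:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    def __init__(self):
        super().__init__("talker")
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish at 1 Hz

    def tick(self):
        msg = String()
        msg.data = "hello from ROS2"
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)  # hand control to the ROS2 executor until shutdown
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```

Any other node on the network that subscribes to `chatter` will receive these messages, which is the distributed communication the paragraph above describes.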

@@ -207,7 +207,7 @@ microscope = MicroscopeDemo(
def detect_apriltag(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    tags = at_detector.detect(gray)
    for tag in tags:
        if tag.tag_id == 1:  # Assuming we're looking for tag with ID 1
            return tag.center
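The `at_detector` used above comes from the AprilTags library listed under Required Software. A sketch of how it might be constructed and fed camera frames; the camera index and tag family are assumptions:

```python
import apriltag
import cv2

# Assumed construction of the at_detector used above (tag36h11 is a common family)
options = apriltag.DetectorOptions(families="tag36h11")
at_detector = apriltag.Detector(options)

cap = cv2.VideoCapture(0)  # assumed camera index
try:
    for _ in range(100):  # check the next 100 frames
        ok, frame = cap.read()
        if not ok:
            break
        center = detect_apriltag(frame)  # function defined above
        if center is not None:
            print(f"Tag 1 center (pixels): {center}")
finally:
    cap.release()
```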
@@ -230,13 +230,13 @@ def microscope_scan():
    microscope.focus()
    image = microscope.take_image()
    image.show()

    print("Scanning area...")
    scan_images = microscope.scan([2000, 2000], [-2000, -2000])
    for i, img in enumerate(scan_images):
        img.show()
        print(f"Showing scan image {i+1}/{len(scan_images)}")

    microscope.move(0, 0)
def main():
@@ -254,7 +254,7 @@
        move_arm_to_target(x, y)
        grab_object()
        print("Object grasped")

        # Perform microscope scan
        microscope_scan()
        break
@@ -334,56 +334,56 @@ import actionlib
class MyCobotNavigator:
    def __init__(self):
        rospy.init_node('mycobot_navigator', anonymous=True)

        # Initialize MyCobot
        self.mycobot = MyCobot("/dev/ttyUSB0", 115200)

        # ROS Publishers
        self.cmd_vel_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        self.goal_pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=10)

        # ROS Subscribers
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.image_callback)
        rospy.Subscriber('/camera/depth/points', PointCloud2, self.pointcloud_callback)
        rospy.Subscriber('/map', OccupancyGrid, self.map_callback)
        rospy.Subscriber('/detected_objects', DetectedObjectsArray, self.object_detection_callback)

        # Initialize CV Bridge
        self.bridge = CvBridge()

        # Initialize MoveBase Action Client
        self.move_base_client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        self.move_base_client.wait_for_server()

        # Initialize TF listener
        self.tf_listener = tf.TransformListener()

        # State variables
        self.current_image = None
        self.current_pointcloud = None
        self.current_map = None
        self.detected_objects = []
        self.navigation_goal = None

    def image_callback(self, msg):
        try:
            self.current_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
            self.process_image()
        except Exception as e:
            rospy.logerr(f"Error processing image: {e}")

    def pointcloud_callback(self, msg):
        self.current_pointcloud = msg
        self.process_pointcloud()

    def map_callback(self, msg):
        self.current_map = msg
        self.update_navigation_plan()

    def object_detection_callback(self, msg):
        self.detected_objects = msg.objects
        self.process_detected_objects()

    def process_image(self):
        if self.current_image is not None:
            # Perform visual navigation tasks
@@ -395,54 +395,54 @@ class MyCobotNavigator:
            for line in lines:
                x1, y1, x2, y2 = line[0]
                cv2.line(self.current_image, (x1, y1), (x2, y2), (0, 255, 0), 2)

            cv2.imshow("Navigation View", self.current_image)
            cv2.waitKey(1)

    def process_pointcloud(self):
        if self.current_pointcloud is not None:
            # Process pointcloud for obstacle detection
            # This is a placeholder for actual pointcloud processing
            rospy.loginfo("Processing pointcloud for obstacle detection")

    def process_detected_objects(self):
        for obj in self.detected_objects:
            rospy.loginfo(f"Detected object: {obj.label} at position: {obj.pose.position}")
            # Implement logic to interact with detected objects
            if obj.label == "target_object":
                self.plan_path_to_object(obj.pose)

    def plan_path_to_object(self, object_pose):
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose = object_pose

        self.move_base_client.send_goal(goal)
        rospy.loginfo("Sent goal to move_base")

    def update_navigation_plan(self):
        if self.navigation_goal is not None:
            self.plan_path_to_goal(self.navigation_goal)

    def plan_path_to_goal(self, goal_pose):
        start_pose = PoseStamped()
        start_pose.header.frame_id = "base_link"
        start_pose.header.stamp = rospy.Time.now()

        try:
            transformed_pose = self.tf_listener.transformPose("map", start_pose)
            # Here you would typically call a path planning service
            # For this example, we'll just log the intent
            rospy.loginfo(f"Planning path from {transformed_pose.pose.position} to {goal_pose.pose.position}")
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException) as e:
            rospy.logerr(f"TF Error: {e}")

    def move_arm_to_target(self, x, y, z):
        # Move MyCobot arm to target position
        self.mycobot.send_coords([x, y, z, 0, 0, 0], 50, 1)  # Adjust speed and mode as needed
        rospy.loginfo(f"Moving arm to coordinates: ({x}, {y}, {z})")

    def run(self):
        rate = rospy.Rate(10)  # 10 Hz
        while not rospy.is_shutdown():
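Assuming the class above lives in a standalone script, a typical entry point would look like this (a standard `rospy` idiom, not shown in the hunk):

```python
if __name__ == "__main__":
    try:
        navigator = MyCobotNavigator()
        navigator.run()  # loops at 10 Hz until the node is shut down
    except rospy.ROSInterruptException:
        pass
```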
@@ -485,25 +485,29 @@ from prefect import task, Flow
import time
import random


@task
def control_robot():
    for _ in range(10):
        print("Moving the robot forward...")
        time.sleep(1)


@task
def monitor_sensors():
    for _ in range(10):
        sensor_value = random.uniform(0.0, 2.0)  # Simulating sensor data
        print(f"LIDAR sensor reading: {sensor_value}")
        time.sleep(1)


@task
def analyze_data():
    for _ in range(10):
        print("Analyzing real-time data...")
        time.sleep(1)


with Flow("Asynchronous Robotics Control") as flow:
    control_robot()
    monitor_sensors()
@@ -532,7 +536,7 @@ To use this script you need to:
### Define Asynchrony in the Context of Hardware Control
Asynchrony in robotics refers to executing tasks independently and concurrently. In the context of hardware control, this is critical for managing complex processes in an autonomous laboratory, where sensors must continually gather data, robots must execute movement commands, and the system needs to react in real-time.
For instance, while a robot is moving, it must be able to continuously monitor its surroundings (e.g., using LIDAR or cameras) and adjust its trajectory without pausing or waiting for other tasks to complete.
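As a minimal illustration of this idea, independent of Prefect, here is an `asyncio` sketch with hypothetical motion and sensing coroutines:

```python
import asyncio
import random


async def move_robot():
    for step in range(5):
        print(f"Moving... step {step}")
        await asyncio.sleep(1)  # yield control while the motion executes


async def monitor_lidar():
    while True:
        reading = random.uniform(0.0, 2.0)  # stand-in for a real LIDAR driver
        if reading < 0.3:
            print(f"Obstacle at {reading:.2f} m - adjust trajectory!")
        await asyncio.sleep(0.2)


async def main():
    monitor = asyncio.create_task(monitor_lidar())
    await move_robot()   # motion and monitoring run concurrently
    monitor.cancel()     # stop monitoring once the move completes


asyncio.run(main())
```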
@@ -593,14 +597,14 @@ bridge = CvBridge()
def image_callback(msg):
    # Convert ROS image messages to OpenCV images
    img = bridge.imgmsg_to_cv2(msg, "bgr8")

    # Use OpenCV to detect pill bottles (e.g. by color or shape)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # Use OpenCV contour detection for medicine bottles
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w > 30 and h > 30:  # Assume the size of the bottle is within this range
@@ -650,11 +654,11 @@ def move_robot():
    rospy.init_node('robot_control_node')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)

    twist = Twist()
    twist.linear.x = 1.0  # Move forward
    twist.angular.z = 0.5  # Rotation

    while not rospy.is_shutdown():
        pub.publish(twist)
        rate.sleep()
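Run as a script, the function above would typically be guarded with the standard `rospy` idiom (assumed here, not shown in the hunk):

```python
if __name__ == "__main__":
    try:
        move_robot()
    except rospy.ROSInterruptException:
        pass  # raised on Ctrl+C so the node exits cleanly
```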
