Wheelchair users face numerous challenges in navigating environments that are often designed with able-bodied individuals in mind, which can significantly affect their health and well-being. To address this, we aim to develop a device that intelligently collects and analyzes data on location, mobility, and environment within an accommodation. By understanding these factors, we can identify suitable activities for each space and improve the overall experience for wheelchair users.
Our system comprises three primary components: room-equipped BLE beacons, a wheel-mounted sensor, and a wheelchair-mounted computing unit.
- BLE Beacons: M5StickC devices, strategically placed throughout the accommodation, broadcast location information.
- Wheel-Mounted Sensor: An M5Stack Capsule, equipped with a Bosch BMI270 6-axis IMU, detects wheelchair movement from changes in the gravity angle. When a movement threshold is exceeded, it transmits the data over WiFi using MQTT.
- Wheelchair-Mounted Computing Unit: A Kria KR260 board serves as the central processing unit. Equipped with USB WiFi and Bluetooth dongles, it communicates with BLE beacons and the wheel-mounted sensor. A Logitech C920 webcam captures visual data for scene recognition by our AI model.
To prepare the development environment, we began by installing Ubuntu 22.04 Desktop LTS onto a 16 GB SD card, following the official Getting Started Guide. We used the included micro-USB cable to establish a serial connection through the TeraTerm application for initial board access. To resolve USB device detection issues, we updated the boot firmware to version 1.02 using the xmutil command, as shown below.
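For reference, the firmware update on the Kria is driven by xmutil; the exact image file name depends on the release you download, so the path below is only a placeholder:
sudo xmutil bootfw_update -i <path-to-new-BOOT.BIN>
sudo xmutil bootfw_status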
To establish remote access, we used an inexpensive USB WiFi dongle (ours came from AliExpress for less than $10) based on the mt7601u chipset. Despite positive reviews of its Linux driver, we encountered initial connection issues. Resolving these required a firmware update using the command:
sudo apt install --reinstall linux-firmware
Subsequently, the WiFi connection was successfully activated and managed through the nmcli command.
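As an example (the SSID and password below are placeholders for your own network), a connection can be created and then checked with:
sudo nmcli device wifi connect "<SSID>" password "<password>"
nmcli device status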
To streamline development, we adopted a hybrid remote approach. For efficient code editing, we leveraged the Remote Explorer extension in VS Code, establishing an SSH connection to the Kria KR260 board.
To run code with a graphical interface, we installed TigerVNC on the board and connected remotely using RealVNC Viewer. A useful tip is to run the following commands in the remote terminal so that new windows can be opened on the VNC display:
export DISPLAY=:1
xhost +
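For completeness, the VNC session itself can be started on the board with a command along these lines (display :1, matching the DISPLAY setting above); the exact invocation depends on how TigerVNC was installed:
vncserver :1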
To execute our project, we relied on a combination of system-level and Python-based software packages. The following tools were installed using their respective package managers (the full commands appear after this list):
- apt command: python3-opencv libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev
- pip3 command: Cython pyaudio edge_impulse_linux bleak aiomqtt
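Put together, the installation amounted to these two commands:
sudo apt install python3-opencv libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev
pip3 install Cython pyaudio edge_impulse_linux bleak aiomqtt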
We used PlatformIO, a popular embedded development environment, to create firmware for the M5Stack Capsule (wheel-mounted sensor) and M5StickC (BLE beacon) boards.
- Wheel-Mounted Sensor: When setting up the M5Stack Capsule project, we selected the M5Stack StampS3 board profile, since the Capsule is built around that module. Essential libraries included SparkFun's BMI270 library for accessing the IMU sensor data, PubSubClient for publishing data over MQTT, and ArduinoJson for efficient data handling.
- BLE Beacons: For the M5StickC boards functioning as BLE beacons, a simple project was created in Platform.io. Due to the inherent BLE broadcasting capabilities of the M5StickC, no additional libraries were required.
The M5StickC devices, configured as BLE iBeacon devices, broadcast their location through a device name ending with a "Room" suffix. After activating the USB Bluetooth dongle with the hciconfig command, we wrote Python code on the Kria KR260 that uses the BleakScanner class from the Bleak library to asynchronously collect advertisement data, including device names and RSSI values, which are stored in a shared dictionary. By identifying the beacon with the strongest RSSI (Received Signal Strength Indicator), the system determines the user's current room.
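A minimal sketch of this scanning logic is shown below; the "Room" name filter and the scan duration are illustrative rather than the exact values from our code, and it assumes a recent Bleak version that exposes RSSI through the advertisement data:
import asyncio
from bleak import BleakScanner

rssi_by_beacon = {}  # shared dictionary: beacon name -> latest RSSI

def on_advertisement(device, advertisement_data):
    # Keep only our beacons, whose names end with the "Room" suffix
    name = device.name or ""
    if name.endswith("Room"):
        rssi_by_beacon[name] = advertisement_data.rssi

async def current_room(scan_time=5.0):
    scanner = BleakScanner(on_advertisement)
    await scanner.start()
    await asyncio.sleep(scan_time)  # collect advertisements asynchronously
    await scanner.stop()
    # The beacon with the strongest (highest) RSSI marks the current room
    return max(rssi_by_beacon, key=rssi_by_beacon.get) if rssi_by_beacon else None

print(asyncio.run(current_room()))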
The M5Stack Capsule, attached to the center of the wheelchair's wheel, uses the BMI270 accelerometer to detect wheel rotation. By sampling the X- and Y-axis acceleration at 1 Hz and computing the gravity angle with the arctangent function, we can estimate the wheelchair's movement. The angular change is accumulated, converted into distance traveled, and reported via MQTT every 10 seconds.
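The conversion itself is simple arc-length arithmetic. The sketch below illustrates the idea in Python with a hypothetical wheel radius; the actual firmware performs the equivalent computation on the Capsule:
import math

WHEEL_RADIUS_M = 0.30  # hypothetical wheel radius; set to the real wheel size

def gravity_angle(ax, ay):
    # Angle of the gravity vector in the wheel plane, from X/Y acceleration
    return math.atan2(ay, ax)

def distance_from_samples(samples, radius=WHEEL_RADIUS_M):
    # samples: (ax, ay) pairs taken at 1 Hz while the wheel turns
    total_angle = 0.0
    prev = gravity_angle(*samples[0])
    for ax, ay in samples[1:]:
        cur = gravity_angle(ax, ay)
        delta = cur - prev
        delta = math.atan2(math.sin(delta), math.cos(delta))  # wrap to [-pi, pi]
        total_angle += abs(delta)
        prev = cur
    return radius * total_angle  # arc length = radius x angle (radians)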
We used the public HiveMQ broker for initial testing. The Python code on the Kria KR260 employs the aiomqtt library, operating asynchronously to receive messages from HiveMQ. Movement status (True/False) is determined from an accumulated-distance threshold of 1 meter.
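A minimal subscriber along these lines might look as follows; the topic name and JSON payload layout are assumptions for illustration, and the messages iteration shown matches aiomqtt 2.x:
import asyncio
import json
import aiomqtt

MOVE_THRESHOLD_M = 1.0  # accumulated distance that counts as "moving"

async def watch_movement():
    async with aiomqtt.Client("broker.hivemq.com") as client:
        await client.subscribe("wheelchair/wheel/distance")  # illustrative topic
        async for message in client.messages:  # aiomqtt >= 2.0 API
            payload = json.loads(message.payload)
            moving = payload.get("distance_m", 0.0) >= MOVE_THRESHOLD_M
            print("moving:", moving)

asyncio.run(watch_movement())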
Idea #3: Scene Detection
The project's third phase involves harnessing the Kria KR260's Deep Learning Processing Unit (DPU) for hardware acceleration. By running an object recognition model, we aim to identify items from the wheelchair user's perspective. Combining this visual data with room occupancy and movement information will let us interpret the user's daily activities.
We used a Logitech C920 webcam connected to a USB port. It is crucial to update the boot firmware to the latest version to enable USB controller 1, which serves the two ports adjacent to the Ethernet port. To verify successful image capture, we used Python and OpenCV to process the webcam feed; real-time image previews were conveniently viewed through VNC Viewer.
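A quick capture test of this kind takes only a few lines of OpenCV (device index 0 is assumed to be the C920):
import cv2

cap = cv2.VideoCapture(0)  # first USB camera; assumed to be the C920
if not cap.isOpened():
    raise RuntimeError("Webcam not detected")
ret, frame = cap.read()  # grab one frame to confirm capture works
if ret:
    cv2.imshow("C920 preview", frame)  # displayed through the VNC session
    cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()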
While numerous resources exist for installing the Vivado and Vitis AI software platforms, our system constraints necessitated an alternative approach. We adapted the PYNQ for Kria SOMs project to install the required packages directly on the board. Despite challenges during the PYNQ package build process, including the constraints of the included 16 GB SD card, we managed to establish a functional PYNQ environment. Due to project deadlines, further exploration of this aspect is left for future work.