Bridge Repair Robot

FAQ

Common questions about the 2025 Capstone Bridge Repair Robot project.
Question: Is this system meant for real bridges?

Answer: This particular system is not. It's a lab-scale prototype meant to demonstrate the scanning, control, and safety features that a full-scale, real-world system would need to operate.

Question: How does the robot identify rust or corrosion on the beam surface?

Answer: Currently, the computer uses a simple webcam with OpenCV to detect pixels that fall within a calibrated rust color range. In future revisions, this module could be swapped for one that detects rust with a trained image-recognition model.

Question: Does the robot's paint can need to be reloaded often?

Answer: Based on our analysis, the robot can run repair cycles continuously for over half an hour before the spray-paint can needs to be replaced. This far exceeds our key performance metric of ten minutes of operation without user interaction.

Question: Why does the robot stop to spray rather than using one long motion for larger rust spots?

Answer: In the system's current form, the robot does not move quickly enough to spray continuously during a motion, so a continuous pass would waste paint and over-cover areas. This is because the system currently uses a single motion type with a fixed 50 mm/s linear speed. That could be changed in the future, but it would require additional consideration of safety and paint consistency.
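A back-of-envelope comparison makes the trade-off concrete. The spray rate and burst duration below are illustrative assumptions, not measured values from the project; only the 50 mm/s speed comes from the text above.

```python
# Compare paint usage: spraying non-stop along a pass vs. one short burst
# per detected rust spot. Rates are assumed for illustration.

LINEAR_SPEED_MM_S = 50.0   # the single motion speed the system uses
SPRAY_RATE_ML_S = 2.0      # assumed paint flow while the valve is open
BURST_DURATION_S = 0.5     # assumed fixed burst per rust spot

def paint_continuous(path_mm):
    """Paint used spraying non-stop along a path of the given length."""
    return (path_mm / LINEAR_SPEED_MM_S) * SPRAY_RATE_ML_S

def paint_bursts(num_spots):
    """Paint used with one fixed burst per detected spot."""
    return num_spots * BURST_DURATION_S * SPRAY_RATE_ML_S

# A 500 mm pass containing 3 small rust spots:
print(paint_continuous(500))   # 20.0 mL sprayed over the whole pass
print(paint_bursts(3))         # 3.0 mL sprayed only on the spots
```

At 50 mm/s the nozzle dwells a long time over every millimeter of the pass, which is why stop-and-burst is the economical choice until faster motions are validated.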

Question: What prevents the robot from moving unsafely or crashing?

Answer: The system uses strict software-defined work zones based on tool orientation, plus FANUC's built-in motion limits, to block unsafe commands. If the robot approaches an invalid position or loses sensor data, the controller automatically stops movement to prevent collisions. Any user command that falls outside the safe motion zones requires a manual override to execute, and in an automatic program, unsafe targets are simply discarded from the command queue.
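A minimal sketch of the queue-filtering behavior described above: targets outside an axis-aligned work zone are dropped before they reach the controller. The zone bounds and function names here are assumptions for illustration, not the project's actual code.

```python
# Hypothetical software-defined work zone (millimeters, per axis).
SAFE_ZONE_MM = {
    "x": (0.0, 800.0),
    "y": (-300.0, 300.0),
    "z": (50.0, 600.0),
}

def in_safe_zone(target):
    """True if an (x, y, z) target lies inside the software work zone."""
    return all(lo <= target[i] <= hi
               for i, (lo, hi) in enumerate(SAFE_ZONE_MM.values()))

def filter_command_queue(targets):
    """Discard unsafe targets, as an automatic program would."""
    return [t for t in targets if in_safe_zone(t)]

queue = [(100.0, 0.0, 200.0),    # valid
         (900.0, 0.0, 200.0),    # outside x range -> discarded
         (400.0, -100.0, 40.0)]  # below z minimum -> discarded
print(filter_command_queue(queue))   # [(100.0, 0.0, 200.0)]
```

In practice this software check sits in front of the controller's own limits, so an unsafe target is rejected twice: once in the queue filter and again by the hardware-level motion limits.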

Question: How do lighting changes affect color-based detection?

Answer: Different lighting conditions shift the colors the targeting camera sees. We solve this with a target color calibration program that is run after moving the robot to a new space, such as from our lab to the SHED. It finds a new range of target RGB values, which we store in our configuration file for the detection algorithm to use.
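The core of such a calibration step can be sketched as sampling pixels from a known rust patch under the new lighting and deriving padded per-channel bounds. The padding value, sample data, and function name are illustrative assumptions, not the project's actual calibration code.

```python
import numpy as np

def calibrate_range(sample_pixels, pad=10):
    """Compute per-channel min/max over sampled rust pixels, with padding."""
    px = np.asarray(sample_pixels, dtype=np.int16)
    lower = np.clip(px.min(axis=0) - pad, 0, 255)
    upper = np.clip(px.max(axis=0) + pad, 0, 255)
    return lower.tolist(), upper.tolist()

# Pixels sampled from a rust patch under the new lighting (RGB):
samples = [(150, 70, 40), (170, 80, 50), (160, 75, 45)]
lower, upper = calibrate_range(samples)
print(lower, upper)   # [140, 60, 30] [180, 90, 60]
```

The padding widens the range slightly so that pixels just outside the sampled extremes, which are common under uneven lighting, are still detected.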

Question: Can the robot adapt to differently shaped beams or varying sizes of corrosion patches?

Answer: For now, the system is limited to flat beam surfaces and to patches no wider than a single spray-paint stroke. Both limitations are solvable in future iterations. Because the motion and targeting algorithms can be adapted to run on different planes, adding a surface scanner that dynamically adjusts the target surface would address the first; handling larger patches requires a more robust scanning and spraying algorithm.