Browse Topic: Robotics

Trajectory planning is a major challenge in robotics and autonomous vehicles, as it must ensure both efficient and safe navigation. The primary objective of this work is to generate an optimal trajectory connecting a starting point to a destination while meeting specific requirements, such as minimizing travel distance and adhering to the vehicle’s kinematic and dynamic constraints. The developed algorithms design the trajectory as a sequence of arcs and straight segments and offer a significant advantage due to their low computational complexity, making them well-suited for real-time applications in autonomous navigation. The proposed trajectory model serves as a benchmark for comparing actual vehicle paths in trajectory control studies. Simulation results demonstrate the robustness of the proposed method across various scenarios.
Soundouss, Halima; Msaaf, Mohammed; Belmajdoub, Fouad
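As a rough illustration of the arc-and-segment trajectories described above (not the authors' algorithm), the sketch below computes the length of a path made of one constant-radius arc followed by a straight segment to the goal; the turning radius and poses are placeholder values.

```python
import math

def left_turn_then_straight(x0, y0, th0, xg, yg, radius):
    """Length of a path made of one CCW arc followed by a straight
    segment from start pose (x0, y0, heading th0) to goal (xg, yg).
    Returns (arc_length, straight_length) or None if the goal lies
    inside the turning circle (no tangent exists)."""
    # Center of the left-hand turning circle.
    cx = x0 + radius * math.cos(th0 + math.pi / 2)
    cy = y0 + radius * math.sin(th0 + math.pi / 2)
    dx, dy = xg - cx, yg - cy
    d = math.hypot(dx, dy)
    if d < radius:
        return None  # goal unreachable with this single maneuver
    # Angle of the tangent point where the arc peels off toward the goal.
    alpha = math.atan2(dy, dx)
    theta_exit = alpha - math.acos(radius / d)
    theta_start = th0 - math.pi / 2          # angle of the start point on the circle
    arc_angle = (theta_exit - theta_start) % (2 * math.pi)
    return radius * arc_angle, math.sqrt(d * d - radius * radius)

# Example: start at the origin heading +x, goal up and to the right.
print(left_turn_then_straight(0.0, 0.0, 0.0, 10.0, 5.0, 2.0))
```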
Industries that require high-accuracy automation in the creation of high-mix/low-volume parts, such as aerospace, often face cost constraints with traditional robotics and machine tools due to the need for many pre-programmed tool paths, dedicated part fixtures, and rigid production flow. This paper presents a new machine learning (ML) based vision mapping and planning technique, created to enhance flexibility and efficiency in robotic operations, while reducing overall costs. The system is capable of mapping discrete process targets in the robot work envelope that the ML algorithms have been trained to identify, without requiring knowledge of the overall assembly. Using a 2D camera, images are taken from multiple robot positions across the work area and are used in the ML algorithm to detect, identify, and predict the 6D pose of each target. The algorithm uses the poses and target identifications to automatically develop a part program with efficient tool paths, including
Langan, Daniel; Hall, Michael; Goldberg, Emily; Schrandt, Sasha
The unicycle self-balancing mobility system offers superior maneuverability and flexibility due to its unique single-wheel grounding feature, which allows it to autonomously perform exploration and delivery tasks in narrow and rough terrains. In this paper, a unicycle self-balancing robot traveling on the lunar terrain is proposed for autonomous exploration on the lunar surface. First, a multi-body dynamics model of the robot is derived based on quasi-Hamilton equations. A three-dimensional terramechanics model is used to describe the interaction between the robot wheels and the lunar soil. To achieve stable control of the robot's attitude, series PID controllers are used for pitch and roll attitude self-balancing control as well as velocity control. The whole robot model and control strategy were built in MATLAB, and the robot's traveling stability was analyzed on the lunar terrain.
Shi, Junwei; Zhang, Kaidi; Duan, Yupeng; Wu, Jinglai; Zhang, Yunqing
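The series PID structure mentioned above can be illustrated with a generic discrete PID loop; the gains, loop pairing, and sensor values below are hypothetical and are not taken from the paper.

```python
class PID:
    """Basic discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
# Outer velocity loop generates a pitch setpoint; inner loops balance pitch and roll.
velocity_pid = PID(kp=0.8, ki=0.1, kd=0.0, dt=dt)    # hypothetical gains
pitch_pid = PID(kp=25.0, ki=0.5, kd=1.2, dt=dt)
roll_pid = PID(kp=30.0, ki=0.5, kd=1.5, dt=dt)

# One control step with made-up sensor readings.
pitch_ref = velocity_pid.update(setpoint=0.5, measurement=0.3)          # m/s
wheel_torque = pitch_pid.update(setpoint=pitch_ref, measurement=0.02)   # rad
roll_torque = roll_pid.update(setpoint=0.0, measurement=-0.01)          # rad
print(wheel_torque, roll_torque)
```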
Towards the goal of real-time navigation of autonomous robots, Iterative Closest Point (ICP)-based LiDAR odometry methods are a favorable class of Simultaneous Localization and Mapping (SLAM) algorithms for their robustness under any lighting conditions. However, even with the recent methods, the traditional SLAM challenges persist, where odometry drifts under adversarial conditions such as featureless or dynamic environments, as well as high motion of the robots. In this paper, we present a motion-aware continuous-time LiDAR-inertial SLAM framework. We introduce an efficient EKF-ICP sensor fusion solution by loosely coupling poses from the continuous time ICP and IMU data, designed to improve convergence speed and robustness over existing methods while incorporating a sophisticated motion constraint to maintain accurate localization during rapid motion changes. Our framework is evaluated on the KITTI datasets and artificially motion-induced dataset sequences, demonstrating
Kokenoz, Cigdem; Shaik, Toukheer; Sharma, Abhishek; Pisu, Pierluigi; Li, Bing
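A heavily simplified, planar sketch of the loose EKF-ICP coupling described above: IMU-derived motion increments drive the prediction and an ICP pose acts as the measurement. The state, noise matrices, and numbers are assumed placeholders, not the paper's continuous-time formulation.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

# State: [x, y, yaw]; covariance P.
x = np.zeros(3)
P = np.eye(3) * 0.1
Q = np.diag([0.02, 0.02, 0.01])   # assumed process noise (IMU/odometry increment)
R = np.diag([0.05, 0.05, 0.02])   # assumed ICP pose measurement noise

def predict(x, P, delta):
    """Propagate with a body-frame increment [dx, dy, dyaw] from IMU integration."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_new = np.array([x[0] + c * delta[0] - s * delta[1],
                      x[1] + s * delta[0] + c * delta[1],
                      wrap(x[2] + delta[2])])
    F = np.array([[1.0, 0.0, -s * delta[0] - c * delta[1]],
                  [0.0, 1.0,  c * delta[0] - s * delta[1]],
                  [0.0, 0.0, 1.0]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z_icp):
    """Correct with an absolute pose estimate coming from ICP."""
    H = np.eye(3)
    y = z_icp - x
    y[2] = wrap(y[2])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P

x, P = predict(x, P, delta=np.array([0.10, 0.0, 0.01]))
x, P = update(x, P, z_icp=np.array([0.09, 0.01, 0.012]))
print(x)
```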
With the growing diversification of modern urban transportation options, such as delivery robots, patrol robots, service robots, E-bikes, and E-scooters, sidewalks have gained newfound importance as critical features of High-Definition (HD) Maps. Since these emerging modes of transportation are designed to operate on sidewalks to ensure public safety, there is an urgent need for efficient and optimal sidewalk routing plans for autonomous driving systems. This paper proposes a sidewalk route planning method using a cost-based A* algorithm and a mini-max-based objective function for optimal routes. The proposed cost-based A* route planning algorithm can generate different routes based on the costs of different terrains (sidewalks and crosswalks), and the objective function can produce an efficient route for different routing scenarios or preferences while considering both travelling distance and safety levels. This paper’s work is meant to fill the gap in efficient route planning for
Bao, Zhibin; Lang, Haoxiang; Lin, Xianke
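The cost-based A* and mini-max route selection can be sketched roughly as below; the terrain costs, the normalization, and the exact objective are assumptions for illustration rather than the paper's formulation.

```python
import heapq

def astar(grid_cost, start, goal):
    """A* on a 2D list of per-cell terrain costs; returns (total_cost, path)."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols:
                ng = g + grid_cost[r][c]
                if ng < best.get((r, c), float("inf")):
                    best[(r, c)] = ng
                    heapq.heappush(open_set, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return float("inf"), []

def minimax_pick(routes):
    """Pick the route minimizing the larger of its normalized length and risk."""
    max_len = max(r["length"] for r in routes)
    max_risk = max(r["risk"] for r in routes)
    return min(routes, key=lambda r: max(r["length"] / max_len, r["risk"] / max_risk))

# Assumed costs: 1.0 for sidewalk cells, higher values for crosswalks/obstacles.
grid = [[1, 1, 3],
        [1, 5, 1],
        [1, 1, 1]]
print(astar(grid, (0, 0), (2, 2)))
print(minimax_pick([{"length": 120.0, "risk": 0.8}, {"length": 150.0, "risk": 0.3}]))
```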
While numerous advancements have been made in autonomous navigation for structured indoor and outdoor environments, these solutions often do not generalize well to off-road settings. Such settings pose unique challenges, including unreliable GPS, limited computational and memory resources, and sparse environmental features, which make navigation particularly difficult. In our work, we propose a novel data structure called Hierarchical Dynamic Scene Graphs (HDSG) to address these challenges. HDSG captures environmental information at different resolutions, integrating both geometric and semantic features. It enables various navigation tasks such as localization, loop closure, and human interaction through the visualization of environmental features for remote operators. We evaluated the performance of localizing a robot’s position within the world frame by comparing compact spatial descriptors extracted from semi-consecutive scene graphs, derived from 3D LiDAR point clouds. Compared to
Alam, Fardifa Fathmiul; Luricich, Federico; Li, Nianyi; Jia, Yunyi; Li, Bing
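One plausible, purely illustrative way to organize hierarchical scene-graph nodes with geometric and semantic attributes and a compact descriptor; the level names and fields are assumptions, not the HDSG schema from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """A node in a hierarchical scene graph (illustrative fields only)."""
    node_id: str
    level: str                      # e.g. "region", "object", "surface"
    centroid: tuple                 # geometric feature: (x, y, z)
    semantic_label: str             # e.g. "tree", "trail", "rock"
    children: list = field(default_factory=list)

    def add_child(self, child):
        self.children.append(child)

    def descriptor(self):
        """A compact spatial descriptor: centroid plus sorted child labels."""
        labels = sorted(c.semantic_label for c in self.children)
        return (self.centroid, tuple(labels))

root = SceneNode("region-0", "region", (0.0, 0.0, 0.0), "clearing")
root.add_child(SceneNode("obj-1", "object", (2.1, 0.4, 0.0), "tree"))
root.add_child(SceneNode("obj-2", "object", (-1.3, 3.0, 0.0), "rock"))
print(root.descriptor())
```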
Over the decades, robotics deployments have been driven by the rapid in-parallel research advances in sensing, actuation, simulation, algorithmic control, communication, and high-performance computing among others. Collectively, their integration within a cyber-physical-systems framework has supercharged the increasingly complex realization of the real-time ‘sense-think-act’ robotics paradigm. Successful functioning of modern-day robots relies on seamless integration of increasingly complex systems (coming together at the component-, subsystem-, system- and system-of-system levels) as well as their systematic treatment throughout the life-cycle (from cradle to grave). As a consequence, ‘dependency management’ of the physical/algorithmic inter-dependencies between the multiple system elements is crucial for enabling synergistic (or managing adversarial) outcomes. Furthermore, the steep learning curve for customizing the technology for platform-specific deployment discourages domain
Varpe, Harshal Babsaheb; Coleman, John; Salvi, Ameya; Smereka, Jonathon; Brudnak, Mark; Gorsich, David; Krovi, Venkat N
Test procedures such as EuroNCAP, NHTSA’s FMVSS 127, and UNECE 152 all require specific pedestrian to vehicle overlaps. These overlap variations allow the vehicle differing amounts of time to respond to the pedestrian’s presence. In this work, a compensation algorithm was developed to be used with the STRIDE robot for Pedestrian Automatic Emergency Braking tests. The compensation algorithm uses information about the robot and vehicle speeds and positions to determine whether the robot needs to move faster or slower in order to properly overlap the vehicle. In addition to presenting the algorithm, tests were performed which demonstrate the function of the compensation algorithm. These tests include repeatability, overlap testing, vehicle speed variation, and abort logic tests. For these tests of the robot involving vehicle data, a method of replaying vehicle data via UDP was used to provide the same vehicle stimulus to the robot during every trial without a robotic driver in the vehicle.
Bartholomew, Meredith; Nguyen, An; Helber, Nicholas; Heydinger, Gary
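A toy version of the speed-compensation idea: adjust the robot's commanded speed so it reaches the planned conflict point at the same time as the vehicle. The function name, limits, and numbers are hypothetical and do not reflect the STRIDE implementation.

```python
def compensated_speed(robot_dist_to_point, vehicle_dist_to_point,
                      vehicle_speed, nominal_robot_speed,
                      min_speed=0.2, max_speed=3.0):
    """Return a robot speed command so robot and vehicle reach the
    conflict point simultaneously (all quantities in SI units)."""
    if vehicle_speed <= 0.0:
        return nominal_robot_speed            # vehicle stopped: keep the nominal pace
    time_to_point = vehicle_dist_to_point / vehicle_speed
    if time_to_point <= 0.0:
        return 0.0                             # vehicle already past the point: stop
    required = robot_dist_to_point / time_to_point
    return max(min_speed, min(max_speed, required))

# Vehicle 40 m out at 10 m/s, robot 6 m from the conflict point -> 1.5 m/s.
print(compensated_speed(6.0, 40.0, 10.0, nominal_robot_speed=1.5))
```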
Several challenges remain in deploying Machine Learning (ML) into safety critical applications. We introduce a safe machine learning approach tailored for safety-critical industries including automotive, autonomous vehicles, defense and security, healthcare, pharmaceuticals, manufacturing and industrial robotics, warehouse distribution, and aerospace. Aiming to fill a perceived gap within Artificial Intelligence and ML standards, the described approach integrates ML best practices with the proven Process Failure Mode & Effects Analysis (PFMEA) approach to create a robust ML pipeline. The solution views ML development holistically as a value-add, feedback process rather than the resulting model itself. By applying PFMEA, the approach systematically identifies, prioritizes, and mitigates risks throughout the ML development pipeline. The paper outlines each step of a typical pipeline, highlighting potential failure points and tailoring known best practices to minimize identified risks. As
Schmitt, Paul; Seifert, Heinz Bodo; Bijelic, Mario; Pennar, Krzysztof; Lopez, Jerry; Heide, Felix
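PFMEA conventionally ranks failure modes by a Risk Priority Number, RPN = severity × occurrence × detection. The snippet below applies that scoring to hypothetical ML-pipeline failure modes; the stages and scores are invented for the example, not taken from the paper.

```python
# Hypothetical failure modes for stages of an ML pipeline, each scored 1-10.
failure_modes = [
    {"stage": "data collection", "mode": "label noise",       "sev": 8, "occ": 6, "det": 4},
    {"stage": "training",        "mode": "data leakage",       "sev": 9, "occ": 3, "det": 7},
    {"stage": "deployment",      "mode": "distribution shift", "sev": 7, "occ": 5, "det": 6},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]   # classic PFMEA risk priority number

# Mitigate the highest-RPN items first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["rpn"]:4d}  {fm["stage"]}: {fm["mode"]}')
```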
Researchers at Universidad Carlos III de Madrid (UC3M) have developed a new soft joint model for robots with an asymmetrical triangular structure and an extremely thin central column. This breakthrough, recently patented, allows for versatility of movement, adaptability and safety, and will have a major impact in the field of robotics.
Accurate object pose estimation refers to the ability of a robot to determine both the position and orientation of an object. It is essential for robotics, especially in pick-and-place tasks, which are crucial in industries such as manufacturing and logistics. As robots are increasingly tasked with complex operations, their ability to precisely determine an object's six-degree-of-freedom (6D) pose, that is, its position and orientation, becomes critical. This ability ensures that robots can interact with objects in a reliable and safe manner. However, despite advancements in deep learning, the performance of 6D pose estimation algorithms largely depends on the quality of the data they are trained on.
Drone show accidents highlight the challenges of maintaining safety in what engineers call “multiagent systems” — systems of multiple coordinated, collaborative, and computer-programmed agents, such as robots, drones, and self-driving cars.
Advances in artificial intelligence (AI), machine learning (ML), and sensor fusion drive robotics functionality across many applications, including healthcare. Ongoing innovations in high-speed connectivity, edge computing, network redundancy, and fail-safe procedures are crucial to optimizing robotics opportunities. The emergence of natural language processing and emotional AI functionality is poised to propel more intuitive, responsive, and adaptive human-machine interaction.
Los Angeles-based plastics contract manufacturer Kal Plastics deployed a UR10e trimming cobot for a fraction of the cost and lead time of a CNC machine, cut trimming time nearly in half, and reduced late shipments to under one percent — all while improving employee safety and growth opportunities.
A team of engineers is on a mission to redefine mobility by providing innovative wearable solutions to physical therapists, orthotic and prosthetic professionals, and individuals experiencing walking impairment and disability. Co-founded by Ray Browning and Zach Lerner, Portland-based startup Biomotum aims “to empower mobility by energizing every step” through its wearable robotics technology.
In creating a pair of new robots, Cornell researchers cultivated an unlikely component: fungal mycelia. By harnessing mycelia’s innate electrical signals, the researchers discovered a new way of controlling “biohybrid” robots that can potentially react to their environment better than their purely synthetic counterparts.
Researchers are developing soft sensor materials based on ceramics. Such sensors can feel temperature, strain, pressure, or humidity, for instance, which makes them interesting for use in medicine, but also in the field of soft robotics.
Researchers from the School of Engineering of the Hong Kong University of Science and Technology (HKUST) have successfully developed what they believe is the world’s smallest multifunctional biomedical robot. Capable of imaging, high-precision motion, and multifunctional operations like sampling, drug delivery, and laser ablation, the robot offers competitive imaging performance and a tenfold improvement in obstacle detection, paving the way for robotic applications in narrow and challenging channels of the human body, such as the lung’s end bronchi and the oviducts.
Robotics researchers have already made great strides in developing sensors that can perceive changes in position, pressure, and temperature — all of which are important for technologies like wearable devices and human-robot interfaces. But a hallmark of human perception is the ability to sense multiple stimuli at once, and this is something that robotics has struggled to achieve.
Researchers have developed a multifunctional sensor based on semiconductor fibers that emulates the five human senses. Prof. Bonghoon Kim, department of robotics and mechatronics engineering of Daegu Gyeongbuk Institute of Science & Technology (DGIST), conducted the study in collaboration with Prof. Sangwook Kim at KAIST, Prof. Janghwan Kim at Ajou University, and Prof. Jiwoong Kim at Soongsil University. The technology developed in the study is expected to be utilized in fields such as wearables, Internet of Things (IoT), electronic devices, and soft robotics.
Insect cyborgs may sound like science fiction, but it’s a relatively new phenomenon based on using electrical stimuli to control the movement of insects. These hybrid insect-computer robots, as they are scientifically called, herald the future of small, highly mobile, and efficient devices.
Soft-bending actuators are gaining considerable attention in robotics for handling delicate objects and adapting to complex shapes, making them ideal for biomimetic robots. Soft pneumatic actuators (SPAs) are preferred in soft robotics because of their safety and compliance characteristics. Using negative pressure for actuation enhances stability by reducing the risk of sudden or unintended movements, which is crucial for delicate handling and consistent performance. Negative pressure actuation is also more energy-efficient, safer, and less prone to leakage, increasing reliability and durability. This paper develops a new soft pneumatic actuator design by comparing various designs and determining its performance parameters. It covers the design and fabrication of flexible soft pneumatic actuators working under negative pressure for soft robotic applications. The material used for fabrication was liquid silicone rubber, and uniaxial tensile tests were conducted to
Warriar J S, Sreejith; Sadique, Anwar; George, Boby
Soft-bending actuators have garnered significant interest in robotics and biomedical engineering due to their ability to mimic the bending motions of natural organisms. Most soft pneumatic actuators for bending use either positive or negative pressure and are designed accordingly. In this study, we propose a novel soft bending actuator that utilizes combined positive and negative pressures to achieve enhanced performance and control. The actuator consists of a flexible elastomeric chamber divided into two compartments: a positive pressure chamber and a negative pressure chamber. Controlled bending motion can be achieved by selectively applying positive and negative pressures to the respective chambers. The combined positive and negative pressure allowed for faster response times and increased flexibility compared to traditional soft actuators. Because of its adaptability, controllability, and improved performance, the actuator can be used for various tasks that call for careful
Lalson, Abirami; Sadique, Anwar
A fast and agile robotic insect developed by MIT could someday aid in mechanical pollination.
Researchers have helped create a new 3D printing approach for shape-changing materials that are likened to muscles, opening the door for improved applications in robotics as well as biomedical and energy devices.
Soft skin coverings and touch sensors have emerged as a promising feature for robots that are both safer and more intuitive for human interaction, but they are expensive and difficult to make. A recent study demonstrates that soft skin pads doubling as sensors made from thermoplastic urethane can be efficiently manufactured using 3D printers.
A team led by Emily Davidson has reported that they used a class of widely available polymers called thermoplastic elastomers to create soft 3D printed structures with tunable stiffness. Engineers can design the print path used by the 3D printer to program the plastic’s physical properties so that a device can stretch and flex repeatedly in one direction while remaining rigid in another. Davidson, an assistant professor of chemical and biological engineering, says this approach to engineering soft architected materials could have many uses, such as soft robots, medical devices and prosthetics, strong lightweight helmets, and custom high-performance shoe soles.
The permanent magnet synchronous motor (PMSM) has become the preferred driving technology in robotic control engineering due to its high power density and excellent dynamic response capability. However, traditional vector control strategies, while widely used, reveal certain limitations due to their reliance on high-precision sensors and complex coordinate transformation calculations. These limitations affect the performance of robots in high-speed environments. This paper proposes a decoupling design for the PMSM current loop based on internal model control (IMC), aiming to improve control accuracy and response speed by simplifying the control algorithm. This new strategy not only maintains the basic framework of vector control but also enhances the dynamic performance of the system through effective decoupling. Simulations conducted using Simulink demonstrate that this strategy significantly improves system stability and dynamic response speed, achieving more precise and rapid
Chen, Hao; Huan, Di; Gong, Chao; Liu, Chenliang
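A simplified sketch of dq current-loop decoupling with IMC-style PI tuning: the cross-coupling and back-EMF terms are fed forward, and the PI gains follow from a chosen closed-loop bandwidth (Kp = αL, Ki = αR). The motor parameters and bandwidth below are placeholders, not values from the paper.

```python
# Placeholder motor parameters (not from the paper).
R_s   = 0.5      # stator resistance  [ohm]
L_d   = 2e-3     # d-axis inductance  [H]
L_q   = 2.2e-3   # q-axis inductance  [H]
psi_f = 0.08     # PM flux linkage    [Wb]
alpha = 2 * 3.14159 * 500    # desired current-loop bandwidth [rad/s]

# IMC tuning for a first-order R-L plant: Kp = alpha*L, Ki = alpha*R.
kp_d, ki_d = alpha * L_d, alpha * R_s
kp_q, ki_q = alpha * L_q, alpha * R_s

def current_controller(id_ref, iq_ref, i_d, i_q, omega_e, int_d, int_q, dt):
    """One step of decoupled dq current control (returns v_d, v_q commands)."""
    e_d, e_q = id_ref - i_d, iq_ref - i_q
    int_d += ki_d * e_d * dt
    int_q += ki_q * e_q * dt
    # PI terms plus feedforward cancellation of the cross-coupled EMF terms.
    v_d = kp_d * e_d + int_d - omega_e * L_q * i_q
    v_q = kp_q * e_q + int_q + omega_e * (L_d * i_d + psi_f)
    return v_d, v_q, int_d, int_q

print(current_controller(0.0, 5.0, 0.1, 4.0, omega_e=300.0,
                         int_d=0.0, int_q=0.0, dt=1e-4))
```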
Many technologies are now emerging, such as firefighting robots, quadcopters, and drones, that are capable of operating in hazardous disaster scenarios. In recent years, fire emergencies have become an increasingly serious problem, leading to hundreds of deaths, thousands of injuries, and the destruction of property worth millions of dollars. According to the National Crime Records Bureau (NCRB), India recorded approximately 1,218 fire incidents resulting in 1,694 deaths in 2020 alone. Globally, the World Health Organization (WHO) estimates that fires account for around 265,000 deaths each year, with the majority occurring in low- and middle-income countries. The existing fire-extinguishing systems are often inefficient and lack proper testing, causing significant delays in firefighting efforts. These delays become even more critical in situations involving high-rise buildings or bushfires, where reaching the affected areas is particularly challenging. The leading causes of
Karthikeyan, S.; Nithish, U.; Sanjay, S.; Sibiraj, T.; Vishnu, J.
Researchers have developed a fully embedded wireless brain neural signal recorder. The device was created by Prof. Jang Kyung-in of the department of robotics and mechanical electronics at DGIST in collaboration with a research team led by Lee Young-jeon of the Korea Research Institute of Bioscience & Biotechnology.
Need a moment of levity? Try watching videos of astronauts falling on the Moon. NASA’s outtakes of Apollo astronauts tripping and stumbling as they bounce in slow motion are delightfully relatable. For MIT engineers, the lunar bloopers also highlight an opportunity to innovate.
While there are concerned voices speaking of a certain disillusionment with AI, as with any potentially game-changing technology, the expectations and fears are often exaggerated. Specifically in relation to industrial robotics, the potential of AI’s impact has, if anything, been muted.
Inspired by a small and slow snail, scientists have developed a robot prototype that may one day scoop up microplastics from the surfaces of oceans, seas, and lakes. The robot’s design is based on the Hawaiian apple snail (Pomacea canaliculata), a common aquarium snail that uses the undulating motion of its foot to drive water surface flow and suck in floating food particles.
Researchers have developed a new soft robot design that engages in three simultaneous behaviors: rolling forward, spinning like a record, and following a path that orbits around a central point. The device, which operates without human or computer control, holds promise for developing soft robotic technologies that can be used to navigate and map unknown environments.
A team led by University of Maryland computer scientists invented a camera mechanism that improves how robots see and react to the world around them. Inspired by how the human eye works, their innovative camera system mimics the tiny involuntary movements used by the eye to maintain clear and stable vision over time. The team’s prototyping and testing of the camera — called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV) — was detailed in a paper published in the journal Science Robotics in May 2024.
In this research, path planning optimization based on the deep Q-network (DQN) algorithm is enhanced through integration with an enhanced deep Q-network (EDQN) for mobile robot (MR) navigation in specific scenarios. This approach involves multiple objectives, such as minimizing path distance and energy consumption while avoiding obstacles. The proposed algorithm has been adapted to operate MRs in both 10 × 10 and 15 × 15 grid-mapped environments, accommodating both static and dynamic settings. The main objective of the algorithm is to determine the most efficient, optimized path to the target destination. A learning-based MR was utilized to experimentally validate the EDQN methodology, confirming its effectiveness. For robot trajectory tasks, this research demonstrates that the EDQN approach enables collision avoidance, optimizes path efficiency, and achieves practical applicability. Training episodes were implemented over 3000 iterations. In comparison to traditional algorithms such as A*, GA
Arumugam, Vengatesan; Alagumalai, Vasudevan; Rajendran, Sundarakannan
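A sketch of how a multi-objective grid-world reward combining path progress, energy cost, and obstacle avoidance might be shaped for such an agent; the weights and terms are assumptions for illustration, not the EDQN paper's reward.

```python
import math

def grid_reward(pos, next_pos, goal, obstacles,
                w_dist=1.0, w_energy=0.1, collision_penalty=50.0, goal_bonus=100.0):
    """Reward for one grid step: progress toward the goal, minus an energy
    cost per move, with a penalty for collisions and a bonus at the goal."""
    if next_pos in obstacles:
        return -collision_penalty
    if next_pos == goal:
        return goal_bonus
    progress = math.dist(pos, goal) - math.dist(next_pos, goal)  # > 0 when moving closer
    return w_dist * progress - w_energy

obstacles = {(3, 4), (5, 5)}
print(grid_reward((2, 2), (3, 2), goal=(9, 9), obstacles=obstacles))
```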
LIDAR-based autonomous mobile robots (AMRs) are gradually being used for gas detection in industries. They detect tiny changes in the composition of indoor environments that are too risky for humans, making them ideal for gas detection. This work focuses on gas detection and on avoiding unwanted accidents in industrial sectors by using an AMR equipped with a LIDAR sensor for autonomous navigation and an MQ2 gas sensor for identifying leakages, including toxic and explosive gases, and alerting the necessary personnel in real time using a simultaneous localization and mapping (SLAM) algorithm and gas distribution mapping (GDM). GDM, in combination with the SLAM algorithm, directs the robot toward the leakage point immediately, thereby avoiding accidents. A Raspberry Pi 4 is used for efficient data processing, and the hardware comprises PGM45775 DC motors for movement and a 2D LIDAR allowing 360° mapping. The adoption of LIDAR-based AMRs
Feroz Ali, L.; Madhankumar, S.; Hariush, V.C.; Jahath Pranav, R.; Jayadeep, J.; Jeffrey, S.
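A minimal illustration of combining an occupancy map with a gas-distribution map to choose the next navigation goal and raise an alert; the grids, threshold, and alert logic are invented stand-ins for the SLAM/GDM pipeline described above.

```python
import numpy as np

occupancy = np.array([[0, 0, 1],      # 1 = obstacle, 0 = free (assumed SLAM map)
                      [0, 0, 0],
                      [0, 1, 0]])
gas_ppm = np.array([[120.,  300.,   0.],   # MQ2-style concentration estimates (GDM)
                    [ 80.,  950., 400.],
                    [ 50.,    0., 600.]])
ALERT_THRESHOLD = 800.0   # hypothetical alarm level in ppm

def next_goal(occupancy, gas_ppm):
    """Return the free cell with the highest estimated gas concentration."""
    masked = np.where(occupancy == 0, gas_ppm, -np.inf)
    r, c = np.unravel_index(np.argmax(masked), masked.shape)
    return (int(r), int(c)), float(masked[r, c])

goal, level = next_goal(occupancy, gas_ppm)
if level > ALERT_THRESHOLD:
    print(f"ALERT: suspected leak near cell {goal} ({level:.0f} ppm)")
print("navigate toward:", goal)
```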
Hemming is an incremental joining technique used in the automotive industry; it involves bending the flange of an outer panel over an inner panel to join two sheet metal panels. Different hemming methods are available, such as press-die hemming, table-top hemming, and robot roller hemming. Robot roller hemming is superior to press hemming and table-top hemming because of its ability to hem complex-shaped parts and is typically used in low-volume automotive production lines. For higher production volumes, such as 120 Jobs per Hour (JPH), press hemming or table-top hemming is generally preferred. However, to achieve high-volume production with the roller hemming method, a multi-station setup is used. This static multi-station setup can be configured into a turntable setup. This new method eliminates the robot load and unload time at each station in the existing setup, resulting in a 40% increase in hemming robot utilization. Therefore, this process reduces the number of robots and the required floor space
Raju, Gokul; Roy, Amlan; Sahu, Shishir; Palavelathan, Gowtham Raj; Jagadeesh, Nagireddi; Chava, Seshadri
Researchers and engineers at the U.S. Army Combat Capabilities Development Command Chemical Biological Center (DEVCOM CBC), Aberdeen Proving Ground, MD, have developed a prototype system for decontaminating military combat vehicles. DEVCOM CBC is paving the way and helping the Army transform into a multi-domain force through its modernization and priority research efforts, which are linked to the National Defense Strategy and the nation's goals. CBC continues to lead in the development of innovative defense technology, including autonomous chem-bio defense solutions designed to enhance accuracy and safety for the warfighter.
The future of wireless technology - from charging devices to boosting communication signals - relies on the antennas that transmit electromagnetic waves becoming increasingly versatile, durable and easy to manufacture. Researchers at Drexel University and the University of British Columbia believe kirigami, the ancient Japanese art of cutting and folding paper to create intricate three-dimensional designs, could provide a model for manufacturing the next generation of antennas. Recently published in the journal Nature Communications, research from the Drexel-UBC team showed how kirigami - a variation of origami - can transform a single sheet of acetate coated with conductive MXene ink into a flexible 3D microwave antenna whose transmission frequency can be adjusted simply by pulling or squeezing to slightly shift its shape.
Researchers have successfully demonstrated the four-dimensional (4D) printing of shape memory polymers in submicron dimensions that are comparable to the wavelength of visible light. 4D printing enables 3D-printed structures to change their configurations over time and is used in a variety of fields such as soft robotics, flexible electronics, and medical devices.
Penn Engineers have developed a new algorithm that allows robots to react to complex physical contact in real time, making it possible for autonomous robots to succeed at previously impossible tasks, like controlling the motion of a sliding object.
Researchers led by Professor Young Min Song from the Gwangju Institute of Science and Technology (GIST) have unveiled a vision system inspired by feline eyes to enhance object detection in various lighting conditions. Featuring a unique shape and reflective surface, the system reduces glare in bright environments and boosts sensitivity in low-light scenarios. By filtering unnecessary details, this technology significantly improves the performance of single-lens cameras, representing a notable advancement in robotic vision capabilities.