Autonomous Systems Technology Glossary
Autonomous systems deploy across defense, logistics, healthcare, agriculture, and transportation sectors — each domain operating under distinct regulatory frameworks, technical standards, and professional qualification requirements. This glossary defines the core terminology structuring that landscape, covering decision-making constructs, sensor architectures, classification systems, and the boundaries between human and machine authority. Precise vocabulary is operationally critical: misapplied terminology in procurement, integration, or safety documentation carries regulatory and liability consequences. The Autonomous Systems Authority index provides the broader sectoral context within which these terms are applied.
Definition and scope
Autonomous systems terminology spans hardware architecture, software behavior, regulatory classification, and operational doctrine. The vocabulary is not uniform across sectors — a term used in Federal Aviation Administration drone regulations may carry different operational meaning than the same term in a Department of Defense directive or an ISO safety standard.
The primary classification framework for behavioral autonomy is the Levels of Autonomy construct, most formally developed in the automotive domain by SAE International. SAE J3016, last revised in April 2021, defines six discrete driving automation levels (0 through 5), where Level 0 represents no driving automation and Level 5 represents full automation with no human fallback requirement. This taxonomy is referenced by the National Highway Traffic Safety Administration (NHTSA) in its automated vehicle policy frameworks and has been adopted as a reference model across sectors beyond automotive.
The levels of autonomy taxonomy governs procurement language, insurance classification, and safety certification thresholds across sectors.
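The J3016 taxonomy lends itself to a simple ordered representation. The sketch below uses paraphrased level names (the normative definitions are in the standard itself) and encodes one of the taxonomy's key thresholds: Levels 0 through 3 rely on a human fallback, while Levels 4 and 5 do not require one within their design scope.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (names paraphrased)."""
    NO_AUTOMATION = 0           # human performs the entire dynamic driving task
    DRIVER_ASSISTANCE = 1       # steering OR speed support; driver supervises
    PARTIAL_AUTOMATION = 2      # steering AND speed support; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives in its ODD; fallback-ready user
    HIGH_AUTOMATION = 4         # system drives and handles fallback in its ODD
    FULL_AUTOMATION = 5         # system drives everywhere; no human fallback

def requires_human_fallback(level: SAELevel) -> bool:
    """Levels 0-3 depend on a human to take over when the system disengages."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

Because the levels are ordinal, an `IntEnum` preserves the comparisons that procurement and certification language often hinges on ("Level 3 or above").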
Key glossary terms by domain cluster:
- Autonomy Level — A discrete classification of the degree to which a system performs tasks without human intervention, defined contextually by SAE J3016 (vehicles), FAA regulatory categories (UAS), or DoD Directive 3000.09 (defense systems).
- Sensor Fusion — The algorithmic integration of data from two or more sensor modalities (LiDAR, radar, camera, IMU) to produce a unified environmental model. See sensor fusion and perception.
- Human-Machine Interface (HMI) — The physical or software boundary at which a human operator interacts with or supervises an autonomous system, governed in aviation by FAA Advisory Circular 25.1302-1.
- Lethal Autonomous Weapon System (LAWS) — A weapons platform capable of selecting and engaging targets without direct human input, subject to DoD Directive 3000.09, which requires "appropriate levels of human judgment over the use of force."
- Edge Computing Node — An onboard processing unit executing inference and control algorithms locally, avoiding the round-trip latency of a cloud connection. See edge computing in autonomous systems.
- Digital Twin — A real-time virtual replica of a physical autonomous system or environment used for simulation, fault prediction, and performance validation. See digital twin technology.
- Geofencing — A virtual perimeter defined by GPS or RF coordinates that constrains an autonomous system's operational zone; mandatory under FAA Part 107 for certain UAS operations.
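Geofencing reduces, at its core, to a containment test against a virtual perimeter. A minimal sketch for a circular perimeter around a GPS coordinate, using great-circle distance (the coordinates and radius are hypothetical, and real implementations add buffer margins and polygon support):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 coordinates."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """True if the reported position lies within the circular perimeter."""
    return haversine_m(lat, lon, center_lat, center_lon) <= radius_m

# Hypothetical 500 m perimeter around a launch point
print(inside_geofence(40.7130, -74.0061, 40.7128, -74.0060, 500.0))  # True
```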
How it works
Autonomous system operation is structured around a three-phase computational loop: perception, planning, and actuation.
Perception aggregates raw data from physical sensors — LiDAR (which can resolve objects at ranges exceeding 200 meters in commercial-grade units), radar, optical cameras, and inertial measurement units — and processes it through fusion algorithms to construct a probabilistic world model. The autonomous systems technology stack details the layered architecture through which perception feeds downstream processes.
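The fusion step can be sketched in its simplest form as a variance-weighted combination of two independent estimates of the same quantity, e.g. a LiDAR range and a radar range. The noise figures below are illustrative, not vendor specifications; production systems use full Kalman or particle filters over many state dimensions.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Variance-weighted fusion of two independent estimates of one quantity.
    The lower-variance sensor gets proportionally more weight, and the fused
    variance is smaller than either input variance."""
    w_a = var_b / (var_a + var_b)
    w_b = var_a / (var_a + var_b)
    fused = w_a * est_a + w_b * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# Illustrative: LiDAR reads 52.0 m (var 0.04), radar reads 52.6 m (var 0.36)
est, var = fuse(52.0, 0.04, 52.6, 0.36)
```

Here the fused estimate lands close to the more precise LiDAR reading, which is the behavior a probabilistic world model relies on.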
Planning converts the world model into an action sequence. This phase typically involves three sub-processes: route or trajectory planning (global path), behavioral prediction (modeling other agents), and motion planning (local maneuver generation). Decision-making algorithms used at this layer range from rule-based finite state machines to reinforcement learning policies, with the assumptions underpinning safety-related behavior models addressed by standards such as IEEE 2846-2022.
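A rule-based finite state machine at the behavioral layer can be sketched as a transition table keyed on (state, event) pairs. The states and triggers below are hypothetical simplifications, not a production state model:

```python
# Minimal behavioral-layer FSM: event-driven transitions between maneuvers.
TRANSITIONS = {
    ("CRUISE", "obstacle_ahead"): "FOLLOW",
    ("CRUISE", "lane_change_requested"): "CHANGE_LANE",
    ("FOLLOW", "path_clear"): "CRUISE",
    ("CHANGE_LANE", "lane_change_complete"): "CRUISE",
}

def step(state: str, event: str) -> str:
    """Advance the FSM one event. Unknown events leave the state unchanged;
    a system fault always forces a minimal-risk maneuver from any state."""
    if event == "system_fault":
        return "MINIMAL_RISK_STOP"
    return TRANSITIONS.get((state, event), state)
```

The unconditional fault transition mirrors the minimal-risk-maneuver requirement that appears throughout automated-driving safety frameworks.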
Actuation translates planned commands into physical outputs — motor torque, servo position, thrust vector — through controllers designed to ANSI/RIA R15.06-2012 safety standards in industrial robotics contexts, or MIL-STD-882E in defense system contexts.
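The three phases compose into a single control loop executed at a fixed tick rate. A skeletal sketch, with each function body standing in for the real subsystem (the thresholds and setpoints are arbitrary illustrations):

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacle_distance_m: float  # nearest obstacle from the perception output

def perceive(lidar_m: float, radar_m: float) -> WorldModel:
    """Perception: fuse raw ranges into a world model (simple average here)."""
    return WorldModel(obstacle_distance_m=(lidar_m + radar_m) / 2)

def plan(world: WorldModel) -> str:
    """Planning: choose a maneuver from the world model (rule-based stand-in)."""
    return "BRAKE" if world.obstacle_distance_m < 10.0 else "CRUISE"

def actuate(command: str) -> float:
    """Actuation: map the planned command to a throttle setpoint in [0, 1]."""
    return 0.0 if command == "BRAKE" else 0.4

def control_loop_tick(lidar_m: float, radar_m: float) -> float:
    """One perception -> planning -> actuation cycle."""
    return actuate(plan(perceive(lidar_m, radar_m)))
```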
AI and machine learning in autonomous systems documents how learned models are integrated into the planning phase, including validation and certification pathways.
Common scenarios
Autonomous systems terminology is applied across five primary deployment contexts, each with distinct regulatory and vocabulary requirements:
- Unmanned Aerial Vehicles (UAVs): FAA Part 107 governs commercial operations of UAS weighing under 55 pounds, and operators must understand terms including BVLOS (Beyond Visual Line of Sight), Remote ID, and DAA (Detect and Avoid). See FAA drone regulations and unmanned aerial vehicle services.
- Autonomous Ground Vehicles: NHTSA's automated vehicles framework references SAE Level classifications in policy guidance. Terminology includes ADS (Automated Driving System), DDT (Dynamic Driving Task), and ODD (Operational Design Domain).
- Industrial Robotics: ISO 10218-1:2011 and ISO/TS 15066:2016 define collaborative robot (cobot) operational terms including power and force limiting (PFL) and speed and separation monitoring (SSM). See industrial robotics automation services.
- Defense Systems: DoD Directive 3000.09 establishes three weapon system categories — autonomous, semi-autonomous, and human-supervised autonomous — with distinct approval authority levels for each.
- Agricultural Robotics: USDA precision agriculture frameworks reference terms including variable rate application (VRA), autonomous guidance systems, and RTK-GPS (Real-Time Kinematic GPS) accuracy specifications typically within ±2 centimeters. See autonomous systems in agriculture.
The Robotics Architecture Authority covers the structural and systems-design vocabulary governing robotic platforms — including mechanical architecture classifications, actuation subsystem taxonomy, and the integration standards that define interoperability between robotic components and autonomous control layers. For professionals specifying or procuring robotic systems, that reference establishes the hardware-layer terminology that complements the software and behavioral glossary defined here.
Decision boundaries
Several glossary distinctions carry direct regulatory and contractual weight, requiring precise application rather than interchangeable use.
Autonomous vs. Automated: An automated system executes a fixed, pre-programmed sequence without adaptive response to environmental change. An autonomous system perceives its environment and modifies behavior in response to unscripted conditions. This is a design-architecture distinction with direct safety and accountability implications, not a marketing gradation.
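The contrast can be caricatured in code, a deliberately minimal illustration rather than a real control architecture: the automated routine produces the same sequence regardless of the environment, while the autonomous step conditions its action on sensed input.

```python
# Automated: a fixed, pre-programmed sequence, identical on every run.
def automated_run() -> list:
    return ["advance", "turn_left", "advance"]

# Autonomous: the action adapts to a sensed, unscripted condition.
def autonomous_step(obstacle_detected: bool) -> str:
    return "replan_around_obstacle" if obstacle_detected else "advance"
```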
Semi-Autonomous vs. Human-Supervised Autonomous (Defense context): Under DoD Directive 3000.09, semi-autonomous systems require a human operator to authorize each individual engagement. Human-supervised autonomous systems are permitted to execute actions within pre-authorized parameters without per-action confirmation. This distinction determines compliance pathway and acquisition approval authority.
Operational Design Domain (ODD) vs. Geofence: An ODD defines the full set of environmental and operational conditions within which an ADS is designed to function — including speed range, weather conditions, and road type — as described in SAE J3016. A geofence is strictly a geographic boundary, a narrower constraint that does not capture the full scope of operational parameters an ODD specifies.
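The ODD/geofence distinction is easy to make concrete: a geofence answers only "is the vehicle inside the boundary?", whereas an ODD check spans multiple operational dimensions at once. The parameters below are an illustrative subset, not the SAE-normative ODD taxonomy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ODD:
    """Illustrative subset of ODD parameters (not SAE-normative)."""
    max_speed_kph: float
    allowed_weather: frozenset
    allowed_road_types: frozenset

def within_odd(speed_kph: float, weather: str, road_type: str, odd: ODD) -> bool:
    """ODD conformance is a multi-dimensional operational check,
    of which geographic location would be only one dimension."""
    return (speed_kph <= odd.max_speed_kph
            and weather in odd.allowed_weather
            and road_type in odd.allowed_road_types)

highway_odd = ODD(110.0, frozenset({"clear", "light_rain"}),
                  frozenset({"divided_highway"}))
```

A vehicle can be inside its geofence yet outside its ODD, e.g. when snow begins to fall, which is why the two terms cannot be used interchangeably in safety documentation.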
Certification vs. Validation: In the context of autonomous systems safety standards, validation confirms a system performs its intended function under defined conditions; certification is a formal regulatory determination — issued by FAA, NHTSA, or a recognized Notified Body — that the system meets prescribed safety requirements. Conflating these terms in procurement or documentation creates compliance exposure.
Autonomous systems liability and insurance and federal regulations for autonomous systems apply these definitional boundaries directly to legal and regulatory compliance determinations.
References
- SAE J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems — SAE International, April 2021
- DoD Directive 3000.09: Autonomy in Weapon Systems — U.S. Department of Defense
- FAA Part 107 — Small Unmanned Aircraft Systems — Electronic Code of Federal Regulations
- NHTSA Automated Vehicles Safety — National Highway Traffic Safety Administration
- FAA Advisory Circular 25.1302-1: Installed Systems and Equipment for Use by the Flightcrew — Federal Aviation Administration