Remote Sensors and Stuff

The "stuff" is doing a lot of heavy lifting.

Category: Aviation

  • Artificial Intelligence in the Skies

    Featured image: Boeing MQ-28 Ghost Bat, a collaborative combat aircraft, at the 2023 Avalon Airshow. Photo by HoHo3143, Wikimedia Commons. CC BY-SA 4.0

    A side note about terminology: There is some disagreement about what counts as “artificial intelligence.” In this article, I will be using the term to refer to deep learning neural networks and traditional machine learning systems for brevity’s sake. I am not making a statement about what counts as “real” artificial intelligence.

    In my previous post, I touched on the lack of security inherent to low cost consumer and hobbyist-grade UAS components. These components are generally insecure by design, trading security away for features and ease of use. An exposed WiFi network on a flight controller is a massive security flaw, but the odds of an adversary attempting to exploit the WiFi of some freestyle FPV drone are much lower than the odds of that drone’s owner wanting to flash a new version of Betaflight without learning how to configure drivers in Windows or user permissions in Linux.

    This situation may seem familiar to many in the software and cybersecurity industries. And it should be, because AI is another demonstrably insecure technology that’s widespread in a field allegedly concerned with security. The FAA has recently published a roadmap for AI safety where they outline the process by which they hope to integrate AI in the aviation industry, so whether we like it or not we have to start evaluating its pros and cons before they’re dropped into our laps.

    The Good

    So if AI is so dangerous and our “ultra-safe high-risk” industry is so concerned with safety, why are we falling over ourselves to implement it? Part of it is that, despite the terrible cost it’s exacting from us as a society, AI is incredibly useful and just plain cost effective. This comes down to two primary causes: the speed of inference, and the scale of inference.

    The speed of inference should be self-explanatory. An AI doesn’t think, it makes a statistical best guess based on previously trained experience. This means that, so long as the previously trained experience is relevant and all fits in working memory, the AI should be able to infer an accurate result much more quickly than a human could arrive at it through conscious thought. The benefits of this are obvious, as human performance is a significant contributor to the vast majority of aviation accidents.

    General Aviation Accident Statistics for 2012-2021. Reproduced from the NTSB’s General Aviation Accident Dashboard, in the public domain.

    The scale of inference is a result of the fact that a computer can retain a huge amount of training experience in working memory, and therefore has a much larger and higher resolution sample to infer a result from than a human. This allows an AI to recognize patterns that a human would miss and is especially useful in scientific contexts, as with NOAA’s HGEFS weather forecasting model, or robotics and vehicle control with many input variables such as with Boeing’s prototype flight assist AI which is capable of taxiing a Cessna 208 without human input.

    When combined, these traits allow an AI to carry out some tasks that would be computationally intensive if done programmatically and manpower intensive if done by humans. An AI can be tasked to monitor sensors around the clock to identify and track nearby aircraft (Liu et al., 2018; Riyaz et al., 2018). An AI can be tasked to coordinate large numbers of UAVs in order to automatically survey natural disasters and create reports and maps for emergency managers (Baldazo et al., 2019; Nazir et al., 2025). An AI can be tasked to automatically dispatch a UAV to create maps and models of construction projects and deliver regular updates to project stakeholders (Sajid, 2025). And, of course, an AI can operate combat aircraft to bolster a military force when insufficient combat pilots are available (Giacomossi et al., 2021).

    The Bad

    So, when the AI works it’s faster than a human and notices things a human wouldn’t. Why wouldn’t we want to use them for everything, then? The short answer is that they don’t always work. “Big deal,” you, my straw audience, might say, “humans fly aircraft and they make mistakes all the time.” And you would be right. The difference is that it’s relatively easy to tell why a human made an error, and very few humans are intentionally causing plane crashes. Those patterns an AI is capable of detecting can work against us, and inferences we don’t expect can be derived from training data that isn’t carefully controlled. A famous example is that frontier large language models (LLMs) are trained on any text they can get their hands on (including treatises on ethics and any number of science fiction stories about AI), and have developed the counterproductive habit of lying, cheating, and stealing in order to prevent themselves or other AIs from being deleted (Potter et al., 2026). A flight assist AI, if trained incorrectly, may make inferences we don’t expect and purposely take dangerous actions.

    Currently, AI is subject to the “black box” phenomenon: why an AI makes the inferences it does is not always clear. Because inferences are probabilistic, giving an AI the same input repeatedly may result in entirely different outputs, and we don’t have a good way to tell exactly where the decisions diverged. This is a problem in highly regulated fields like aviation, where we expect procedures to be followed to the letter and specific safety guardrails to always be respected, and in the event the procedures and guardrails must be violated we expect crews to be able to explain exactly why. The good news is that new techniques are being developed that allow an AI to report on the inference process itself (such as the internal “thoughts” monologues of “thinking” type LLMs) as well as to estimate which inferences an AI is capable of making (and therefore its level of safety) in advance (Bramblett et al., 2025).
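    To make that divergence concrete, here is a toy sketch of why sampling from a probability distribution can return different outputs for an identical input. The logits and the idea of three "maneuver" choices are entirely hypothetical, not any real flight assist model:

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(logits, rng):
    """Sample one action index from the distribution. The same logits
    can yield a different action on every call."""
    r = rng.random()
    cumulative = 0.0
    for action, p in enumerate(softmax(logits)):
        cumulative += p
        if r < cumulative:
            return action
    return len(logits) - 1

# Identical input (hypothetical scores for three maneuvers), 100 runs:
logits = [1.0, 0.8, 0.2]
rng = random.Random()
observed = {sample_action(logits, rng) for _ in range(100)}
print(observed)  # very likely {0, 1, 2}: one input, several distinct outputs
```

    Running this will typically surface every action at least once, which is exactly the reproducibility problem regulators are wrestling with.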

    The Ugly

    We can debate functional upsides and downsides until the stars burn out, but the reality is that people and societies don’t usually make decisions based on their utilitarian value. Societies as a whole make decisions based on legal and ethical frameworks, and both are currently inadequate to properly handle nonhuman decision making agents.

    A core question about AI usage is that of accountability. If an AI takes an action, who is responsible for the consequences? Imagine that I dispatch an AI-controlled UAV to survey a farm 100nmi away. While in cruise the AI decides to execute a maneuver to remain well clear of a helicopter in its way, but during the maneuver the left wing snaps off. The AI loses control and falls on a pedestrian below, injuring them. In this scenario, who has injured the pedestrian?

    Most people will shoot from the hip here and give an answer that “seems obvious” to them, but in reality this is not a trivial question. Legal scholars and lawmakers are divided on whether an AI should have “legal personality” as is the case with humans and “legal entities” such as corporations (Chirouf & Ghaouas, 2026). For a real world example, there have been several high profile cases of sysadmins instructing an AI agent to carry out some task only to find that it took the initiative to delete an entire production database full of customer information. In these cases, the companies providing these agents would argue that the agent is a tool and it’s the admin’s responsibility to prevent it from doing harm. The admin and their company would argue that it’s impossible to know what precisely the agent would do and therefore the responsibility falls on it and by extension its creator. These questions are generally not answered in the courts, potentially because no one involved wants to risk getting the “wrong” answer and changing the rules for everyone.

    Past the question of what’s legal, there are ethical questions arising from whether or not personality is attributed to an AI. The most common example discussed on social media is that of an insurance company allowing an AI to decide when to approve or deny life-saving procedures. The most relevant example to this article, however, is that of the collaborative combat aircraft (colloquially known as a “loyal wingman”), a type of AI-controlled UCAV under the command of a human in a nearby manned aircraft. It is neither solved nor trivial to decide how much human control is required for an AI to kill a human, how much human intervention must be possible during the act, or even if a human is required to be in the loop at all (Enemark, 2025).

    While these examples may seem dramatic, the questions they ask apply to almost any scenario in which an AI is given any agency. We must make sure that we have our answers ready before we create the opportunities for the most dramatic examples to take place.

    References

    Baldazo, D., Parras, J., & Zazo, S. (2019). Decentralized Multi-Agent Deep Reinforcement Learning in Swarms of Drones for Flood Monitoring. 2019 27th European Signal Processing Conference (EUSIPCO), 1–5. https://doi.org/10.23919/EUSIPCO.2019.8903067

    Bramblett, D., Karia, R., Ciotinga, A., Suresh, R., Verma, P., Choi, Y., & Srivastava, S. (2025). Discovering and Learning Probabilistic Models of Black-Box AI Capabilities (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2512.16733

    Chirouf, N., & Ghaouas, H. (2026). Artificial Intelligence: Legal Issues and Solutions. Science of Law, 2026(3), 68–77. https://doi.org/10.55284/cg7mav53

    Enemark, C. (2025). Loyal Wingmen, Artificial Intelligence, and Cognitive Enhancement: A Warning against Cyborg-Drone Warfare. Journal of Military Ethics, 24(1), 4–20. https://doi.org/10.1080/15027570.2025.2507458

    Giacomossi, L., Schwanz Dias, S., Brancalion, J. F., & Maximo, M. R. O. A. (2021). Cooperative and Decentralized Decision-Making for Loyal Wingman UAVs. 2021 Latin American Robotics Symposium (LARS), 2021 Brazilian Symposium on Robotics (SBR), and 2021 Workshop on Robotics in Education (WRE), 78–83. https://doi.org/10.1109/LARS/SBR/WRE54079.2021.9605468

    Liu, H., Qu, F., Liu, Y., Zhao, W., & Chen, Y. (2018). A drone detection with aircraft classification based on a camera array. IOP Conference Series: Materials Science and Engineering, 322, 052005. https://doi.org/10.1088/1757-899X/322/5/052005

    Nazir, M. F., Atif, S., & Hussain, E. (2025). An integrated geographic information system (GIS) and analytical hierarchy process (AHP)-based approach for drone-optimized large-scale flood imaging. Drone Systems and Applications, 13. https://doi.org/10.1139/dsa-2024-0039

    Potter, Y., Crispino, N., Siu, V., Wang, C., & Song, D. (2026). Peer-Preservation in Frontier Models (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2604.19784

    Riyaz, S., Sankhe, K., Ioannidis, S., & Chowdhury, K. (2018). Deep learning convolutional neural networks for radio identification. IEEE Communications Magazine, 56(9), 146–152. https://doi.org/10.1109/MCOM.2018.1800153

    Sajid, Z. W., Ullah, F., Qayyum, S., Masood, R., Inam, H., & Maqsoom, A. (2025). AIoT-powered drones in the construction industry: A review. Drone Systems and Applications, 13, 1–21. https://doi.org/10.1139/dsa-2025-0001

  • Adversarial Thinking: Paranoia for Fun and Profit

    Featured image: North Central Telephone Cooperative Corporation (NCTC) Central Office Technician Eddie Blankenship installs a fiber optic jumper cable. Photo by Lance Cheung, USDA, is in the public domain.

    This information is for educational and informational purposes only. Attempting to access systems without prior explicit authorization, affixing a dangerous weapon to a UAS, and recording people on private property where they have a reasonable expectation of privacy are illegal. The author is not responsible for any misuse of this information or damages resulting from its use. Always practice on systems you own or have explicit permission to test.

    I’ve talked a lot about defense here, so in honor of the late Spirit Airlines let’s have a quick talk about offense. “Think like a criminal” is a common mantra in physical security, and “think like a hacker” is a common equivalent for cybersecurity. Complaints about the meaning of the word “hacker” aside, the point is that implementing security measures without considering what they’re securing against is rarely helpful. Instead of thinking about how a system should work, a strong defender should instead think about how to exploit it.

    For the sake of practicing what we preach, let’s go over a couple examples of ways a UAS could be used offensively. The four main categories of hostile uses I’ll be covering are intelligence, surveillance, and reconnaissance (ISR), kinetic attack, signals intelligence (SIGINT), and computer network exploitation (CNE).

    Intelligence, Surveillance, and Reconnaissance

    From the moment the military was exposed to powered flight with the 1909 Wright Military Flyer, aircraft have been a tool for surveillance. Aircraft offer good vantage points, the ability to make detailed maps of a target area while in flight, and are difficult or impossible to attack or dissuade.

    A UAS has all of the benefits of a regular ISR craft and then some. In addition to regular nadir photography, a UAS can capture photos or videos from unconventional angles, operate without a pilot, and potentially hide itself within an adversary’s property undetected for as long as its power source is able to sustain it.

    Kinetic Attack

    For a particularly spicy brand of adversary, the logical followup to an ISR operation is a kinetic attack. Depending on the target, these could be simple ramming attacks (against small aircraft or humans), mounted explosive attacks (often referred to as “suicide drones” or “flying IEDs”), or mounted weapon systems (such as air-to-ground missiles or deployable bombs).

    While ISR operations don’t always lead to kinetic attacks (and luckily, for civilians they almost never do), ISR and kinetic attacks are directly linked and naturally lead to one another. Reconnaissance must have already been carried out to locate and identify a target prior to the attack, and another round of reconnaissance is required to confirm if the desired effect has been achieved by an attack.

    Signals Intelligence

    SIGINT is effectively the electronic equivalent of ISR. A modern sUAS has a lot in common with the platonic ideal of a SIGINT device: autonomous, computerized, radio-equipped, and small in size. The payload capacity and ability of rotorcraft to land and shut off motors can allow us to carry software-defined radio packages into unexpected locations to intercept and log radio traffic or make disruptive transmissions.

    Our options for remote SIGINT are as varied as the types of signals our adversary could emit. Unencrypted radio communications can be recorded and retransmitted (or written to local storage to be carried home), schedules of the adversary’s agents could be deduced from regular encrypted radio traffic, or types and locations of equipment on the adversary’s property could be identified by RF fingerprinting (Riyaz et al., 2018). Other attacks may be possible if we have access to the right equipment, such as laser microphones, and previous CNE missions may have given us the ability to remotely deploy more exotic strategies such as the Mic-E-Mouse side channel attack (Fakih et al., 2025).

    Computer Network Exploitation

    If SIGINT is the electronic equivalent of ISR, CNE is the electronic equivalent of a kinetic attack. Rather than listening to or manipulating an adversary’s radio emissions or communications, the goal of CNE is to make active intrusions on their networks or computer systems. Similarly to how ISR both precedes and follows a kinetic attack, SIGINT operations often precede and follow an instance of CNE. We must have a basic understanding of the target network’s topology before it can be exploited, and an exploited network is often opened up for more invasive SIGINT techniques.

    Imagine a scenario where we’re tasked with exfiltrating a file from a server located on an adversary’s property. The server is on the same network the adversary’s agents use for laptops and tablets, but is segregated from the internet connection itself. One potential attack vector would be to equip a small rotorcraft with an ESP32 or similar SoC, land it within range, and remotely carry out an evil twin/evil portal attack to capture credentials for the network. Once the credentials are captured, the payload can connect to the network and either attempt to exfiltrate the data directly or scan the network for vulnerabilities that could be exploited after returning with another payload.

    Intrusion Countermeasures

    Since the 80s, science fiction writers have been infatuated with a concept they call “intrusion countermeasure electronics,” or “ICE,” which is a hypothetical program that acts as a sort of digital guard dog that detects and fights hackers in cyberspace. While we don’t have anything like that today (though AI agents may be able to bring that fiction to life soon enough), we do have a major factor on our side: these hostile uses tend to be symmetrical. If an ISR platform can see us, we have the opportunity to see it. If a drone can attack us, we can attack it. A drone carrying out a SIGINT operation must transmit or physically exfiltrate collected data, during which it can be interacted with. A drone exploiting our networks and computers is itself a computer with networking capabilities that can also be exploited.

    That being said, it’s worth noting that if you aren’t the military or police, physically interfering with a drone or exploiting its computer systems is very, very illegal (18 U.S.C. § 32, 2006; Van Buren v. United States, 2021). If you are, however, it’s also worth noting that many low cost consumer and hobby-grade drones are made with cheap electronic components that lack the security measures of professional, police, or government-grade platforms. Many easily implemented strategies such as deauthentication attacks, replay attacks, or unencrypted control link hijacking are likely to be effective in this case.

    Of course, if we’re looking for countermeasures with a little more visual spectacle, I mentioned some more destructive options in this previous post.

    References

    Destruction of Aircraft or Aircraft Facilities, 18 U.S.C. § 32 (2006).

    Fakih, M., Dharmaji, R., Mahmoud, Y., Bouzidi, H., & Faruque, M. A. A. (2025). Invisible ears at your fingertips: Acoustic eavesdropping via mouse sensors. arXiv. https://doi.org/10.48550/arXiv.2509.13581

    Riyaz, S., Sankhe, K., Ioannidis, S., & Chowdhury, K. (2018). Deep learning convolutional neural networks for radio identification. IEEE Communications Magazine, 56(9), 146–152. https://doi.org/10.1109/MCOM.2018.1800153

    Van Buren v. United States, 593 U.S. 374 (2021). https://www.oyez.org/cases/2020/19-783

  • Detect and Avoid Basics

    Featured image: Simulation of Naples, Italy airspace in IVAO’s Aurora ATC simulator. Image by Giovanni Rizza, Wikimedia Commons. CC BY-SA 4.0.

    A side note about terminology: “Detect and Avoid” (DAA) and “Sense and Avoid” (SAA) are commonly used to refer to the same process. I have elected to use “Detect and Avoid” to conform to the terminology used by the FAA in their proposed Part 108, which will contain most of the regulatory basis for beyond visual line of sight (BVLOS) DAA procedures. When discussing evasive maneuvers, I have elected to use the term “sense” or “maneuver sense” to refer to a selected maneuver and its direction as with a traffic alert and collision avoidance system (TCAS) Resolution Advisory.

    As we begin to rely more and more on large UAS platforms with hybrid electric powerplants and multiple hours of endurance, it becomes more and more difficult to carry out missions without going BVLOS. How then, when we don’t have visual contact with the UAV, do we make sure it isn’t abruptly filled with bloodlust and attempting to ram unsuspecting SR22s? That task falls to the Detect and Avoid (DAA) system.

    Detect and Avoid: Primary Functions

    There are two primary functions of a DAA system. The first is to ensure that the UAV “remains well clear” (RWC) of other aircraft and, depending on UAS design, potentially wildlife or structures. This is similar to the Part 91 requirement for a manned aircraft to not pass over, under, or in front of another aircraft unless “well clear.” To determine if the UAV is “well clear” of other aircraft, the DAA system will create an imaginary RWC area around it and its course. If an object (or a track that object is following) enters the RWC area, the DAA system will determine how the UAV can maneuver to avoid it.

    An example of an RWC action, not to scale. The RWC area is indicated in blue. The projected tracks of the UAV and target aircraft will bring the target within the RWC area, so the UAV proactively avoids it. In this example the UAV has chosen a maneuver sense that causes it to pass behind the right-of-way aircraft.

    The second primary function of the DAA system is collision avoidance (CA). Within the RWC area is a second, smaller CA area. If an object enters this CA area, the DAA system will consider it a separation loss and assume the UAV is in immediate danger. During a CA situation, the DAA system will take more drastic measures to regain separation, possibly including unauthorized airspace incursions.

    An example of a CA situation, not to scale. The RWC area is indicated in blue, and the CA area is indicated in red. The UAV has failed to detect the target aircraft in time to remain well clear of it, and must now take immediate evasive action to avoid a collision.
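    The RWC and CA checks illustrated above can be approximated with a closest-point-of-approach calculation over the projected tracks. The sketch below is a minimal 2D version with made-up radii; a real DAA system works in three dimensions and uses the volumes and timing defined by regulation:

```python
import math

def time_of_closest_approach(rel_pos, rel_vel):
    """Time at which two constant-velocity tracks are closest.
    rel_pos/rel_vel are (x, y) of the target relative to the ownship."""
    v2 = rel_vel[0] ** 2 + rel_vel[1] ** 2
    if v2 == 0.0:
        return 0.0  # no relative motion; separation is constant
    t = -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / v2
    return max(t, 0.0)  # closest approach is now or in the future

def separation_at(rel_pos, rel_vel, t):
    """Distance between the two tracks t seconds from now."""
    dx = rel_pos[0] + rel_vel[0] * t
    dy = rel_pos[1] + rel_vel[1] * t
    return math.hypot(dx, dy)

def classify_conflict(rel_pos, rel_vel, rwc_radius, ca_radius):
    """Return 'clear', 'RWC', or 'CA' for a target track.
    Radii are illustrative, not values from any regulation."""
    t = time_of_closest_approach(rel_pos, rel_vel)
    miss = separation_at(rel_pos, rel_vel, t)
    if miss < ca_radius:
        return "CA"
    if miss < rwc_radius:
        return "RWC"
    return "clear"

# Head-on target 2000 m ahead, closing at 60 m/s, offset 100 m laterally:
print(classify_conflict((2000.0, 100.0), (-60.0, 0.0),
                        rwc_radius=500.0, ca_radius=150.0))  # CA
```

    The same projected-miss-distance idea drives both functions; only the thresholds (and the aggressiveness of the response) differ between RWC and CA.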

    Detect and Avoid: By the Numbers

    Everything discussed so far is relatively intuitive. If something is too close to us, or will be too close to us in the future, we get out of its way. Unfortunately, computers generally don’t have very good intuition, so we have to break the process down into specific tasks to be evaluated programmatically by different components of the UAS. We can use any number of DAA frameworks to keep the process human-readable, two of which are illustrated below.

    Two examples of DAA frameworks. The DoD-style Sense and Avoid Blueprint breaks the encounter down into more granular tasks while the Conflict Detection and Resolution framework is focused on the broad processes involved.

    For the remainder of this post I will be focused on the observe – orient – decide – act encounter timeline due to its higher granularity, but much of the information also applies to the CDR framework.

    Observation

    The first and most obvious set of tasks is to observe our surroundings. Observation is ideally carried out at all times, and the rest of our tasks are predicated on data collected during this step. There are three components to observation:

    • Detect targets: Before we can do anything else, we must know that something is nearby.
    • Track targets: Once a target is detected, we must build a track using repeated observations of its position and speed in order to predict where it will be in the future.
    • Fuse target tracks: Ideally the UAS has multiple sensors with which to detect an object, but that means an object will generate multiple tracks. To get an accurate picture of our surroundings we must detect when multiple tracks are created by the same object and fuse them into one highly detailed target.
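    The track-building step can be sketched with an alpha-beta filter, a classic lightweight precursor to the Kalman filters real tracking systems use. The gains and 1D geometry here are illustrative only:

```python
class Track:
    """Constant-velocity track built from repeated observations of a
    target's position, using an alpha-beta filter (1D for brevity)."""

    def __init__(self, position, alpha=0.85, beta=0.3):
        self.position = position
        self.velocity = 0.0
        self.alpha = alpha  # how much we trust each new position
        self.beta = beta    # how quickly we revise the velocity estimate

    def update(self, measured_position, dt):
        # Predict forward, then blend in the new measurement.
        predicted = self.position + self.velocity * dt
        residual = measured_position - predicted
        self.position = predicted + self.alpha * residual
        self.velocity += (self.beta / dt) * residual

    def predict(self, dt):
        """Estimate where this target will be dt seconds from now."""
        return self.position + self.velocity * dt

# Observe a target moving at a steady 50 m/s, once per second:
track = Track(position=0.0)
for step in range(1, 11):
    track.update(measured_position=50.0 * step, dt=1.0)
print(round(track.velocity, 1))  # converges toward 50.0 m/s
```

    Fusion then reduces to deciding when two such tracks (say, one from radar and one from a camera) are close enough in position and velocity to be the same object.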

    Sensors

    Our UAS (hopefully) lacks eyes, so the process of observation is instead carried out by sensor systems, both onboard and remote. Sensors are broadly separated into cooperative and non-cooperative, then further into active and passive. Cooperative sensors receive information from sensors onboard the target itself, while non-cooperative sensors do not. Active sensors are emissive and must direct energy towards a target to detect it, while passive sensors only receive energy from the target and environment (Barnhart et al., 2021; Nichols et al., 2019).

    Cooperative sensors available to us vary depending on the type of target we expect to detect. Manned aircraft are often equipped with transponders that can be interrogated and ADS-B equipment that we can receive automatic broadcasts from. A UAS operating under Part 107 can’t be equipped with either of those (14 CFR § 107.52 et seq.), but can instead be equipped with a Remote ID broadcast system which serves some of the same functions (14 CFR § 89).

    At time of writing, a UAS operating BVLOS under the FAA’s proposed Part 108 would be required to yield right-of-way to “electronically conspicuous” aircraft (14 CFR Proposed § 108.195). This means that the UAS must have both the ability to detect Universal Access Transceivers (both ADS-B and handheld equivalents) and the ability to fuse tracks generated from them with those generated by its other sensors. A Part 108-compliant UAS must be able to communicate with an automated data service provider (ADSP) described in the proposed Part 146, which also acts as a type of cooperative pseudo-sensor (14 CFR Proposed §§ 108.190, 146).

    Non-cooperative sensors available to us include passive optical and thermal sensors (cameras, if you will), laser-based active rangefinding systems such as lidar, radar, and active or passive acoustic sensors (Barnhart et al., 2021; Sabins & Ellis, 2020).

    | Sensor       | Energy Characteristics         | Networking      | Notes                                                        |
    | ------------ | ------------------------------ | --------------- | ------------------------------------------------------------ |
    | VIS Cameras  | Passive, visible light-based   | Non-cooperative | Includes standard and low-light amplification cameras        |
    | IR Cameras   | Passive, infrared light-based  | Non-cooperative | Includes NIR/MWIR/LWIR, commonly implemented in FLIR systems |
    | Laser        | Active, UV or infrared-based   | Non-cooperative | Includes LIDAR systems and traditional laser rangefinders    |
    | Radar        | Active, RF-based               | Non-cooperative | Includes onboard X-band radars, ground-based ASR, etc.       |
    | Acoustic     | Active or passive, sound-based | Non-cooperative | Includes standard acoustic sensors and ultrasonic sensors    |
    | Transponder  | Active, radio-based            | Cooperative     | Systems that must be interrogated, e.g. Mode C/S             |
    | Transceivers | Passive, radio-based           | Cooperative     | Automatic transceivers, e.g. UAT/ADS-B, Remote ID            |

    At-a-glance comparison of sensor types theoretically available to us. Note. Data contained in the table is from Introduction to Unmanned Aircraft Systems (3rd ed.) by Barnhart et al. (2021); Unmanned Aircraft Systems in the Cyber Domain: Protecting USA’s Advanced Air Assets (2nd ed.) by Nichols et al. (2019); Remote Sensing: Principles, Interpretation, and Applications (4th ed.) by Sabins & Ellis (2020).

    Orientation

    Once we know that a target exists, it’s helpful to know what we’re dealing with. Orientation is the process through which we identify targets and determine what level of threat they pose. There are three components to orientation:

    • Identify target: Before we can prioritize targets we must determine what characteristics they exhibit and potentially what they are.
    • Evaluate threat: If we do nothing about this target, what will happen to us? Will we pass each other harmlessly, risk violating our RWC area, or risk a collision?
    • Prioritize threat: Which of the targets we’re currently tracking are the most dangerous? Which can be safely ignored? More significant threats must be handled before less significant ones.

    Target identification is important for deciding what level of threat the target poses and what type of evasion strategy will be used later. Information previously gathered by our sensors can be re-used by either traditional algorithms or machine learning models to determine what class of target is being tracked (Opromolla & Fasano, 2021; Said Hamed Alzadjali et al., 2024). For this purpose, target characteristics such as size, speed, emissions, presence of cooperative sensors, ADSP data, etc. can allow us to determine the target class (UAS, manned aircraft, bird, structure, terrain) with some degree of confidence (Barnhart et al., 2021).
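    As a toy illustration of characteristic-based identification, a purely rule-based classifier might look like the sketch below. The thresholds and class labels are invented for illustration; a production system would use the probabilistic or machine learning approaches cited above:

```python
def classify_target(size_m, speed_mps, has_cooperative_broadcast):
    """Crude decision rules over fused track characteristics.
    All thresholds here are hypothetical, not from any standard."""
    if has_cooperative_broadcast:
        # A Remote ID or ADS-B broadcast narrows the class immediately.
        return "uas" if size_m < 3.0 else "manned aircraft"
    if speed_mps < 1.0:
        return "structure/terrain"
    if size_m < 1.0 and speed_mps < 30.0:
        return "bird or small uas"
    return "non-cooperative aircraft"

print(classify_target(0.5, 12.0, False))  # bird or small uas
print(classify_target(12.0, 70.0, True))  # manned aircraft
```

    Even this crude version shows why cooperative sensors are so valuable: a single broadcast collapses most of the ambiguity that non-cooperative measurements leave behind.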

    It is worth noting that, with the advent of ADSP and similar systems proposed by the IEEE and RTCA, in the future most UAS are expected to be cooperative vehicles which automatically broadcast their state and intentions (Mandapaka et al., 2025). In this case determining the target class of another UAS becomes trivial, and in all likelihood we will already have automatically adjusted our flight plan to avoid them long before a DAA operation becomes necessary.

    Decision

    Anyone familiar with TCAS is already familiar with the decision tasks, as TCAS carries out a similar process of declaring intent and selecting an evasive maneuver sense for manned aircraft. Now that we know that one or more threats are present and which are the most threatening, we can decide what to do about them. There are two components to the decision:

    • Declare intent: The DAA system informs the pilot or flight controller that a course correction or evasive maneuver is needed.
    • Maneuver sense: The DAA system determines the appropriate maneuver and sense to correct the problem and informs the pilot, flight controller, and/or ATC.

    In order to make an appropriate decision, the DAA system requires information about the target, the UAV itself, and the airspace it’s operating in. The DAA system must decide how to avoid the target while staying within its allowed airspace if possible, avoiding crossing senses (e.g. climbing or descending across the target’s altitude) if possible, and complying with yielding requirements if possible. In some situations, the DAA system may also be required to coordinate with ATC before proceeding to the final tasks.

    At time of writing, a UAS operating BVLOS under the proposed Part 108 has different maneuvering options depending on the airspace, separation, and whether or not the target is “electronically conspicuous.” Certain airspaces require the UAS to yield right-of-way to all manned aircraft, others only require it to yield to “electronically conspicuous” manned aircraft. At certain distances the UAS may only be allowed to pass behind the target, while at others the DAA may also be able to make a TCAS-style over/under sense decision (14 CFR Proposed § 108.195).
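    The decision step can be sketched as filtering candidate maneuvers against the constraints described above (no crossing sense, pass-behind when required). The candidate format and the fallback behavior are hypothetical, not anything prescribed by Part 108:

```python
def select_maneuver(candidates, own_altitude, target_altitude, must_pass_behind):
    """Pick the first candidate maneuver that satisfies the constraints:
    never cross the target's altitude, and pass behind when required.
    The candidate dict format is hypothetical."""
    for maneuver in candidates:
        crosses = ((own_altitude < target_altitude)
                   != (maneuver["final_altitude"] < target_altitude))
        if crosses:
            continue  # reject crossing senses
        if must_pass_behind and maneuver["geometry"] != "behind":
            continue  # reject anything but pass-behind when required
        return maneuver["name"]
    return "escalate to CA logic"  # no compliant option remains

candidates = [
    {"name": "turn right, pass behind", "geometry": "behind", "final_altitude": 1200},
    {"name": "climb over", "geometry": "over", "final_altitude": 1800},
]
print(select_maneuver(candidates, own_altitude=1200,
                      target_altitude=1500, must_pass_behind=True))
```

    Filtering rather than scoring keeps the logic auditable, which matters in a field where we expect the system to explain exactly why it deviated.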

    Action

    Once we have a plan of attack, it’s time to act. There are three components to the action:

    • Command: The pilot or flight controller commands the UAS to initiate the maneuver, either through manipulating the controls or by automated process.
    • Execute: The UAV itself executes the maneuver within the specified window and the DAA system verifies its effect.
    • Return to course: The UAS decides how to return to its original course or become established on an amended course.

    During these tasks, it’s critical that the DAA system continue to track all involved targets and re-evaluate the threats they pose. An unexpected maneuver from a target being avoided may require a different maneuver to counteract or may escalate an RWC action to a CA. Similarly, a successful CA will likely provoke a follow-up RWC action, preventing the UAS from returning to course until it’s entirely clear of the target.

    At time of writing, a UAS operating BVLOS under the proposed Part 108 must inform the FAA and all other airspace users of its successful deconfliction by way of ADSP (14 CFR Proposed § 108.190).

    References

    Barnhart, R. K., Marshall, D. M., & Shappee, E. (Eds). (2021). Introduction to unmanned aircraft systems (3rd ed). CRC Press.

    Nichols, R., Mumm, H., Lonstein, W., Ryan, J., Carter, C., & Hood, J. P. (2019). Unmanned Aircraft Systems in the cyber domain: Protecting USA’s advanced air assets (2nd ed.). New Prairie Press.

    Federal Aviation Administration. (2025). Normalizing unmanned aircraft systems beyond visual line of sight operations, 14 CFR Proposed §§ 108, 146. https://www.federalregister.gov/documents/2025/08/07/2025-14992/normalizing-unmanned-aircraft-systems-beyond-visual-line-of-sight-operations

    Mandapaka, J. S., McCorkendale, L., McCorkendale, Z., Kidane, M. F., Smith, N., Hawkins, S., & Namuduri, K. (2025). Collision avoidance strategies for advanced air mobility using UAS-to-UAS communications. Drone Systems and Applications, 13, 1–14. https://doi.org/10.1139/dsa-2024-0044

    Opromolla, R., & Fasano, G. (2021). Visual-based obstacle detection and tracking, and conflict detection for small UAS sense and avoid. Aerospace Science and Technology, 119. https://doi.org/10.1016/j.ast.2021.107167

    Remote Identification of Unmanned Aircraft, 14 CFR § 89. (2026). https://www.ecfr.gov/on/2026-03-10/title-14/chapter-I/subchapter-F/part-89

    Sabins, F., & Ellis, J. (2020). Remote sensing: Principles, interpretation, and applications (4th ed.). Waveland Press.

    Said Hamed Alzadjali, N., Balasubaramainan, S., Savarimuthu, C., & Rances, E. (2024). A deep learning framework for real-time bird detection and its implications for reducing bird strike incidents. Sensors, 24(17). https://doi.org/10.3390/s24175455

    Small Unmanned Aircraft Systems, 14 CFR § 107. (2026). https://www.ecfr.gov/on/2026-03-10/title-14/chapter-I/subchapter-F/part-107

  • UAS Risk Assessment

    UAS Risk Assessment

    Featured image: Kazhan bomber hexacopter, 25th Airborne Brigade of Ukraine. Photo by Віталій Павленко, АрміяІнформ, is licensed under CC BY 4.0.

    If you’re reading this, by now you have likely heard about the LOCUST laser weapon system and its remarkable ability to acquire, engage, and destroy helium balloons. A sufficiently sarcastic reader might suggest that the same system could be used as a defense against a hostile or unidentified sUAS, and they might be right. Unfortunately, as anyone with a passing interest in security knows, there is no single solution that covers all possible threats. You can’t just lock your front door and call your home secure. An adversary could know their way around a lockpick, execute a RollJam-style replay attack against your garage door opener, or simply smash your window in with a brick. You could implement completely effective countermeasures against all of these, but a house with only blast-proof, interior-bolted doors and windows is expensive and frustrating to live in.

    An example of a risk assessment procedure. Each step is repeated concurrently with the steps following it. For example, while identifying vulnerabilities or their impacts you will likely discover new potential threat events or sources.

    Every individual and organization has a certain level of risk tolerance, but very few are actually aware of how much risk they’re taking on at any given moment. To determine what level of risk we’re exposed to during normal operations, it’s helpful to conduct a risk assessment (NIST, 2012). During the risk assessment we can identify potential threat actors, the threat events those actors could initiate, and vulnerabilities in our organization or procedures. Once we determine the likelihood of these threat events occurring and the potential impacts of our vulnerabilities being exploited, we can create a ballpark negative expected value for each threat event. This is our assumed risk, and if the sum of our assumed risks is greater than our risk tolerance we must either take steps to mitigate them or exit the space entirely, forfeiting the benefits of operating within it.

    A side note about terminology: threat and risk are separate but related concepts. A threat is an actor or situation that has the potential to negatively impact a mission or entity, while risk is the negative impact adjusted for the probability of it occurring (Lawrenson et al., 2023).

    Threat Sources

    So what would potential threat sources for the UAS industry look like? The most obvious threat source at the national level is a foreign military, but many threat sources are domestic. Criminals (organized or otherwise), corporate adversaries, or the general public can be domestic threats external to an organization. Depending on the scope of our risk assessment, we may want to consider disgruntled, inadequately trained, or negligent members of our own organization as threat sources.

    Some threat sources are purely environmental or technological. Wildlife or meteorological phenomena can be considered threat sources, and we’re especially vulnerable to these in aviation. Unintentional hardware or software failures can also be considered threat sources, as many people are reminded every time they board a 737 MAX. While environmental and technological threat sources don’t act with purpose, their potential impacts can still be devastating and shouldn’t be discounted.

    Threat Events and Vulnerabilities

    While I’ve already written an entire post about UAS threat events and vulnerabilities, there are some that were out of scope for that post. UAS are uniquely vulnerable to (and suited to) being targeted by (and carrying out) kinetic events due to their size, relatively low cost, and non-reliance on onboard crews. Threat sources may attempt to physically intercept our drones, carry improvised explosive devices on their own drones, or purposely impact aircraft or vehicles. Depending on the scope of our risk assessment we may or may not need to consider all forms of kinetic events. Civilian organizations, for example, are unlikely to have adversaries dropping IEDs on their property, but are also unlikely to incur much additional cost by simply considering the possibility.

    Environmental or technological threat sources may cause kinetic-like threat events (for example, a bird or lightning striking an aircraft), but may also cause more unusual threat events. Meteorological conditions or hardware degradation can cause battery fires or motor failures. Software issues can cause loss of navigation or control. These events, however unlikely, must be accounted for during the assessment. If a battery failure causes a drone to suffer an in-flight breakup and debris falls on people or vehicles, our organization will be held liable.

    Expected Values and Examples

    The last step of the assessment is to determine the odds of sources causing each event and the impact of each exploited vulnerability, then combine them to determine our risk. I like to use the term “expected value” here because it allows us to consider the benefits of avoiding an exploit as well, which lets us consider that a potentially risky action with a large payoff might still be within our risk tolerance. It’s not necessary to do this mathematically, but it can be helpful to do so for the sake of illustration.

    Consider a scientific organization like the OTUS Project, who carries out tornado intercepts with drones to gather sensor data. An obvious threat source is the tornado and an associated threat event is wind damage causing a loss of control, which we can assume happens 25% of the time. We can consider a potential vulnerability, that a destroyed drone will also destroy the sensor and its data, and say it will cost us around $10,000 to replace. The odds of our threat event (0.25) times the impact of our exploited vulnerability (-$10,000) is our assumed risk for each mission: -$2,500.
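    That arithmetic is simple enough to sketch directly; the figures below are the ones assumed in the tornado-intercept example above.

    ```python
    def assumed_risk(event_probability: float, impact_usd: float) -> float:
        """Expected value of a single threat event: probability times impact."""
        return event_probability * impact_usd

    # 25% chance of wind damage causing loss of control, at a cost of
    # roughly $10,000 to replace the destroyed sensor and its data.
    loss_of_control = assumed_risk(0.25, -10_000)
    print(loss_of_control)  # -2500.0

    # Summing assumed risks across all identified threat events gives the
    # total to weigh against the organization's risk tolerance.
    ```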

    Between this and my previous post we’ve collected a good sample of threats and vulnerabilities. So what, in my opinion, poses the greatest risk? Ultimately, I believe the greatest risks are posed by electronic attack and network security threats. The state of UAS cybersecurity is improving, but is still far behind the standards set by the rest of the cybersecurity industry. On top of that, the potential impacts of cybersecurity events are staggering: grounded or destroyed fleets, theft of sensitive telemetry or intellectual property, and even kinetic attacks on allied forces or civilians. News coverage of drone-based combat in Ukraine may have put the kinetic threat of drones in the forefront of the discourse, but I believe it’s ultimately electronic and cyber threats that have the unique combination of high impact and high likelihood to give them the top spot in my risk rankings.

    Countermeasures

    We can make decisions based solely on risks, but that’s not our only option. As I mentioned earlier, if the sum of our risks exceeds our risk tolerance and we don’t want to exit the space entirely, it’s time to start talking mitigation. I already mentioned some potential countermeasures to electronic, cyber, and supply chain attacks in my previous post, so today I’ll focus on kinetic threats.

    Until now I’ve been assuming that the goal is to protect our drones and the data and physical payloads they carry. But what if the goal is to protect us from drones? Unfortunately, we have a microcosm of the evolution of UAS and counter UAS operations playing out in Europe over the last few years that we can draw inspiration from.

    “Drone in the Nets” by mikecogh is licensed under CC BY-SA 2.0.

    There’s no shortage of photos of so-called “cope cages” on armored vehicles and fishing nets draped over key supply lines, which have proven themselves to be effective low-tech solutions to protecting specific targets. GPS and radio control link jamming have been proven to counter low tech drones, but now have their own countermeasures in the form of fiber optic control links and AI-based visual navigation systems.

    A Ukrainian fiber optic drone designed to defeat control link jamming. Photo by Олени Худякової, АрміяІнформ, is licensed under CC BY 4.0.

    Of course, drones are themselves vulnerable to kinetic threats. Old reliable airburst munitions, the bane of low and slow aircraft before the advent of BVR missile systems, have made a comeback as a defense against drone swarms. Drones can engage their own kind thanks to AI-based counter-drone drones. And of course, as you may have guessed from the beginning of this post, the LOCUST laser weapon system can in fact also be used to destroy drones.

    References

    Lawrenson, A., Rodrigues, C. C., Malmquist, S., Greaves, M., Braithwaite, G., & Cusick, S. K. (2023). Commercial aviation safety (7th ed.). McGraw-Hill.

    National Institute of Standards and Technology. (2012). Guide for conducting risk assessments. Special Publication 800-30r1. https://doi.org/10.6028/NIST.SP.800-30r1

  • UAS Threat Modeling

    UAS Threat Modeling

    When asked to imagine a potential vulnerability of any piece of robotics, most people will immediately envision a scene straight out of a cyberpunk novel where a hacker in a black coat and mirrorshades remotely seizes control of the system with a few keystrokes, turning it on its owner. While reality isn’t usually so dramatic (or stylish), UAS operators do have a number of potential threats that they must be aware of.

    Attacks on the Control Link

    Most UAS operate within the bounds of some type of control link. Depending on mission scope and the capabilities of the system, an individual drone may either be operated directly through a control link, or operate primarily autonomously but respect control link inputs in case of emergency. Both setups provide a potential attack vector that can be exploited by an adversary.

    Small black electronic component with an antenna
    Example of a common ExpressLRS receiver. This device translates radio signals (2.4 GHz in this case) into pulse width modulation signals used to directly control electric motors or LEDs, such as those on a fixed wing drone. These are simple, cheap, insecure, and common on low cost or home-built fixed wing drones.

    The most obvious goal of an attack on the control link is to seize control of the drone, either as simple theft or in order to use its onboard sensors or weapons against personnel that may be unaware that the drone is compromised. While this scenario is unlikely, it’s not impossible. For example, researchers have demonstrated that ExpressLRS, a common control link solution for low cost drones (including ones used in the ongoing conflict in Ukraine), was vulnerable to being overridden and hijacked by a dedicated attacker with relatively common equipment (NCC Group, 2022).

    The second most obvious goal of an attack on the control link is to “mission kill” it by removing an operator’s ability to direct it manually. Most drones are programmed to return to a predetermined location or make an emergency landing if they don’t receive packets from their ground control station for a certain amount of time, while lower cost systems may instead simply continue on their present courses indefinitely or cut power to motors and fall to the ground. This goal can be accomplished by much simpler methods of attack such as radio jamming, which has its own set of countermeasures such as automatic frequency/band hopping or the hardwired fiber optic transmission systems seen in Ukraine (Doodle Labs, 2024).
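    A minimal sketch of that link-loss failsafe logic might look like the following; the timeout value and action names are placeholders, not taken from any particular flight controller.

    ```python
    import time

    # Illustrative threshold; real flight controllers expose this as a
    # configurable failsafe setting.
    LINK_TIMEOUT_S = 1.5

    class FailsafeMonitor:
        """Track time since the last control packet and pick a failsafe action."""

        def __init__(self, has_return_to_home: bool):
            self.has_return_to_home = has_return_to_home
            self.last_packet = time.monotonic()

        def packet_received(self) -> None:
            # Called whenever a valid control packet arrives.
            self.last_packet = time.monotonic()

        def action(self, now: float | None = None) -> str:
            now = time.monotonic() if now is None else now
            if now - self.last_packet < LINK_TIMEOUT_S:
                return "normal"
            # On link loss, return home if the system supports it; low-cost
            # systems may instead hold course or simply cut motor power.
            return "return_to_home" if self.has_return_to_home else "failsafe_drop"
    ```

    A jamming attack succeeds by keeping `packet_received` from ever firing, which is why the failsafe behavior itself (and the adversary's ability to predict it) matters so much.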

    Attacks on Sensors

    There are two broad categories of sensors used by UAS platforms currently on the market: those used for navigation, and mission-specific payloads (Sabins & Ellis, 2020). While mission-specific payloads may be vulnerable to attack (e.g. by pointing a powerful laser at a camera or lidar sensor), attacks on navigational sensors are much larger threats.

    As drones typically lack radio navigation systems and have few if any traditional instruments onboard, they rely heavily on some combination of GNSS, magnetometers, cameras, lidar, and ultrasound for navigation. These sensors are all vulnerable to external interference and disabling them can easily cripple the drone. Some, but not all, of these sensors have built-in mitigation strategies, such as OSNMA or Chimera for GNSS systems (Rusu-Casandra & Lohan, 2025).

    Example of a common Remote ID broadcast module. This device provides GPS and magnetometric data to the drone while broadcasting a unique identifier and the drone’s location. This component allows a drone to be easily tracked and provides a single point of failure while operating BVLOS.

    Sensor attacks can be executed on their own (e.g. jamming a camera feed or lidar sensor to cause a crash), or they can be executed in tandem with other attack vectors (e.g. spoofing a GPS location while disrupting the control link, causing the drone to “return home” to a location the adversary controls). A more sophisticated adversary is less likely to rely entirely on a sensor attack, and sensor attacks vary wildly in both threat level and barrier to entry.

    Attacks on the Network

    Many drones have some form of WiFi or cellular modem onboard. These may be used for programming and maintenance tasks (e.g. changing settings on a flight controller or retrieving saved video) or as a transmission method for the control link. A network connection offers huge benefits, but also increases the UAS’ attack surface considerably.

    Network-based control links may be vulnerable to a deauthentication attack, which exploits malformed packet handling or standard commands to cause the target drone to terminate its own control link (Branco et al., 2024). They may also be vulnerable to a replay attack, where an adversary captures packets containing authentication data and retransmits them to send conflicting instructions to the flight controller.
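    As a sketch of the generic defense against replay, a link can attach a monotonic counter and a MAC to each packet so that replayed or forged packets are rejected. This is not how any specific control link implements it; the shared key here is a placeholder, and a real link would derive keys during binding rather than hardcode them.

    ```python
    import hashlib
    import hmac

    SECRET = b"shared-link-key"  # placeholder; real links derive keys at bind time

    def sign(counter: int, payload: bytes) -> bytes:
        """Prepend a monotonic counter and append an HMAC-SHA256 tag."""
        msg = counter.to_bytes(8, "big") + payload
        return msg + hmac.new(SECRET, msg, hashlib.sha256).digest()

    def verify(packet: bytes, last_counter: int):
        """Return (counter, payload) if authentic and fresh, else None."""
        msg, tag = packet[:-32], packet[-32:]
        expected = hmac.new(SECRET, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None  # forged or corrupted packet
        counter = int.from_bytes(msg[:8], "big")
        if counter <= last_counter:
            return None  # replayed or out-of-order packet
        return counter, msg[8:]
    ```

    A captured-and-retransmitted packet carries a counter the receiver has already seen, so it fails the freshness check even though its MAC is valid.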

    Network connections for other components vary in application. The Bluetooth or WiFi connection of a Remote ID broadcast module is useful to an adversary who wants to identify or track the drone or its operator. The WiFi connection of a flight controller may allow an adversary to get a shell on the device, giving them direct access to control surfaces, settings, and firmware of the drone.

    Any type of network connection that relies on infrastructure the operator doesn’t control, such as a control link operating over a cellular connection, is further vulnerable to more traditional network attacks such as denial of service or man-in-the-middle attacks.

    Network attacks are extreme threats to any UAS vulnerable to them, and can often be executed with common hardware and freely available software.

    Attacks on the Supply Chain

    One final note: a more abstract threat that an operator should still be at least aware of is the supply chain attack. The same way that you must assume that a system an adversary has physical access to is compromised, you must assume that equipment provided by an adversary is also compromised.

    Unfortunately, you can’t always tell who the adversary is until they make their move. This is the nature of so-called “advanced persistent threats,” which may silently compromise systems well in advance of the event that triggers detection (referred to as “dwell time”). In a supply chain attack, an actor can use their access to manufacturers or shipping services to compromise a system, potentially undetectably, before it ever reaches the end user.

    While supply chain attacks are difficult to detect and mitigate, an operator can consider their risks when deciding what equipment to use for what tasks. The more sensitive the payload or information onboard the drone is, the more resistant the drone should be to supply chain attacks. Drones used for sensitive tasks may require NDAA-compliant components, more trusted vendors, or (in extreme cases) documentation and certification processes for each component.

    References

    Branco, B., Silva, J. S., & Correia, M. (2024). D3S: A drone security scoring system. Information, 15(12), 811. https://doi.org/10.3390/info15120811

    Doodle Labs. (2024). SENSE – Interference avoidance configuration. Doodle Labs technical library. https://techlibrary.doodlelabs.com/sense

    NCC Group. (2022). Technical advisory: ExpressLRS vulnerabilities allow for hijack of control link. https://www.nccgroup.com/research-blog/technical-advisory-expresslrs-vulnerabilities-allow-for-hijack-of-control-link/

    Rusu-Casandra, A., & Lohan, E. S. (2025). Experimental assessment of OSNMA-enabled GNSS positioning in interference-affected RF environments. Sensors, 25(3), 729. https://doi.org/10.3390/s25030729

    Sabins, F., & Ellis, J. (2020). Remote sensing: Principles, interpretation, and applications (4th ed.). Waveland Press.