By Alex Scerri | August 5, 2025
As artificial intelligence (AI) continues to shape nearly every aspect of modern life, its integration into aviation safety systems was only a matter of time. From predictive maintenance to fully autonomous aircraft, AI holds the potential to transform aerospace — and it’s gaining traction with both established original equipment manufacturers (OEMs) and innovative startups.
One such company is Zürich-based Daedalean. Founded in 2016, with offices in Phoenix, Arizona, and Riga, Latvia, the company is developing AI-based vision and object recognition systems that support a range of safety-critical functions, including non-cooperative traffic detection, wire detection, GNSS-independent positioning, and landing guidance.
In a move that underscores the growing importance of AI in flight systems, Daedalean was recently acquired by Destinus — a European developer of flight technologies for civil and defense applications — with the transaction expected to close by the end of 2025. The acquisition supports Destinus’ strategy to expand its AI capabilities for unmanned systems across both markets.
In September 2024, Daedalean appointed Bas Gouverneur as CEO to lead its commercial strategy. He joined the company from Swiss aerospace and defense leader RUAG, where he most recently served as chief technology officer. Prior to that, he spent six years at SR Technics Group, leaving as head of design organization.
Vertical spoke with Gouverneur at the 2025 International Paris Airshow, where he outlined Daedalean’s roadmap, its progress to date, and how its technology is set to enhance safety across the aviation industry, including for eVTOL aircraft.
This interview has been edited for length and clarity.
Alex Scerri: The first question an aerospace insider would ask is: how are you going to certify these tools?
Bas Gouverneur: That was the big unsolved debate in aviation. However, we have now developed a method for this — both with the European Union Aviation Safety Agency [EASA] and the Federal Aviation Administration [FAA]. The technologies we are leveraging — machine learning [ML] and artificial vision — are relatively well-known, and many companies are working in this space. However, certifying these systems and ensuring they behave consistently was a huge challenge to overcome. We now have guidance from both regulators to do this.
The key to effective ML is having a large volume of high-quality data to train the system for all the scenarios it needs to cover. We’ve divided the world into different regions, and we’re now flying our aircraft and helicopter across Europe and the U.S. to gather data and test the system in various environments — from deserts to mountainous terrain.
Alex Scerri: Do you need air-ground connectivity for the system to function?
Bas Gouverneur: All the computation is done within the avionics on the aircraft. We collect the dataset to train the algorithm, which is then verified against a separate, independent dataset, after which the algorithm is ‘locked.’ This is then loaded into the aircraft avionics, so there isn’t a situation where the aircraft is learning on the fly or modifying the predetermined algorithm. In essence, it is a deterministic system.
Of course, the more data we collect, the more it allows us to improve and tweak the algorithm, which can be periodically loaded into the aircraft as an update.
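The train-verify-lock release cycle Gouverneur describes can be illustrated with a toy classifier. Everything below is a simplified sketch, not Daedalean's actual pipeline: a model is fitted on one dataset, gated on a distinct held-out set, and then frozen so that in-service inference is fully deterministic.

```python
from dataclasses import dataclass
from statistics import mean

# Toy stand-in for a trained perception model: a 1-D threshold classifier.
# All names and data are illustrative assumptions.

def train(samples):
    """Fit a decision threshold from labeled (value, label) training data."""
    pos = [v for v, y in samples if y == 1]
    neg = [v for v, y in samples if y == 0]
    return (mean(pos) + mean(neg)) / 2.0  # midpoint between class means

def verify(threshold, holdout):
    """Measure accuracy on a distinct, held-out verification set."""
    correct = sum(1 for v, y in holdout if (v > threshold) == (y == 1))
    return correct / len(holdout)

@dataclass(frozen=True)
class LockedModel:
    """Once verified, parameters are frozen: no learning on the fly."""
    threshold: float

    def predict(self, value):
        return 1 if value > self.threshold else 0

train_set = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
holdout = [(0.15, 0), (0.85, 1)]  # never seen during training

t = train(train_set)
assert verify(t, holdout) == 1.0  # release gate: must pass on unseen data
model = LockedModel(threshold=t)  # deterministic from here on
```

A periodic update, as described above, would mean re-running this whole cycle offline on a larger dataset and loading a new locked model, never mutating the one in the aircraft.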
Alex Scerri: Will you be using user-collected data as part of this feedback and improvement process?
Bas Gouverneur: That’s a good question. We’re not there yet, but it’s something we could consider in the future.
Alex Scerri: Your system relies predominantly on optical data. Are you thinking of blending in other sensor inputs?
Bas Gouverneur: For traffic detection, we already integrate ADS-B, FLARM, and other electronic conspicuity aids that may be available. We’ve also started work on introducing infrared sensors to improve the system’s sensitivity in low-light conditions.
Alex Scerri: GNSS interference and spoofing are topical subjects, and satellite signals can also be unreliable in urban environments with tall buildings. How does your system address this serious challenge for low-level navigation?
Bas Gouverneur: That’s exactly the work we’re doing in collaboration with Moog. We integrate our vision-based navigation system into their flight management system [FMS], which is then linked to the autopilot.
We recently flew a 70-nautical-mile [130-kilometer] closed circuit — simulating a search-and-rescue low-altitude search pattern — using Moog’s Bell OH-58, without any GNSS signal. The aircraft relied solely on terrain features to determine its position in space. The flight also used our visual traffic detection [VTD] and completed the mission with optical landing guidance.
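The terrain-based positioning described here can be sketched as matching the camera's view against a georeferenced reference raster. The toy example below is purely illustrative (the arrays, patch size, and sum-of-squared-differences metric are all assumptions, not Daedalean's method): it finds where an observed patch best fits a known map.

```python
def locate(reference, view):
    """Find the (row, col) offset in a reference map where an observed
    view patch best matches, by minimizing sum of squared differences."""
    vr, vc = len(view), len(view[0])
    best, best_pos = float("inf"), None
    for r in range(len(reference) - vr + 1):
        for c in range(len(reference[0]) - vc + 1):
            ssd = sum(
                (reference[r + i][c + j] - view[i][j]) ** 2
                for i in range(vr) for j in range(vc)
            )
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Tiny stand-in for a georeferenced terrain raster.
reference = [
    [0, 0, 1, 2],
    [0, 3, 4, 2],
    [1, 3, 5, 6],
    [1, 1, 2, 2],
]
view = [[3, 4], [3, 5]]  # what the camera "sees" below the aircraft
position = locate(reference, view)  # map offset of the best match
```

A real system would match robust feature descriptors rather than raw pixels, but the principle is the same: position comes from terrain appearance, not satellite signals.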
Alex Scerri: How does the optical landing guidance work, and would it be able to identify an off-field landing site?
Bas Gouverneur: Those are two different applications — landing guidance and landing site detection. The technology readiness level for the landing guidance system is at a more advanced stage.
For landing guidance, we can detect either a runway or a designated landing site for a helicopter. In the latter case, we use a symbol similar to a QR code that the optical system can recognize to determine the helicopter’s position relative to the landing site and guide it to touchdown. The system can also work with the standard ‘H’ symbol used at most helipads.
The off-field landing capability is less mature at this stage. It works by evaluating the surroundings and detecting relevant objects — such as trees or other obstacles — to identify an area large enough for the helicopter to land safely.
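The off-field search Gouverneur outlines, scanning the surroundings for an obstacle-free area large enough to land in, can be illustrated with a classic grid search. This sketch assumes detected obstacles have already been rasterized into an occupancy grid; the grid, cell size, and clearance figure are all hypothetical.

```python
def largest_clear_square(grid):
    """Side length (in cells) of the largest obstacle-free square in an
    occupancy grid (1 = obstacle, 0 = clear), via dynamic programming."""
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    best = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                if r == 0 or c == 0:
                    dp[r][c] = 1
                else:
                    dp[r][c] = 1 + min(dp[r-1][c], dp[r][c-1], dp[r-1][c-1])
                best = max(best, dp[r][c])
    return best

# 1 = detected obstacle (tree, pole), 0 = clear ground.
grid = [
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
]
CELL_M = 5              # illustrative ground resolution per cell
CLEARANCE_M = 14        # hypothetical required landing clearance
side = largest_clear_square(grid)
can_land = side * CELL_M >= CLEARANCE_M
```

Here the largest clear square is 3 cells (15 m) on a side, so the hypothetical 14-m clearance requirement is met and the area would qualify as a candidate landing site.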
Alex Scerri: You also have wire detection. Do you need a separate system for this?
Bas Gouverneur: We use the same cameras and avionics box, so no additional hardware is needed. We understand how critical this capability is for safe low-level flying, so we’re giving it special attention — training the system to detect wires and poles using dedicated datasets.
Alex Scerri: The system is only as good as the optical data it receives from the cameras. What are you doing in terms of redundancy?
Bas Gouverneur: The system supports multiple cameras, but we’ve observed that each camera continues to function effectively even in adverse conditions — for example, when the lens is partially obscured by bug strikes or atmospheric dust.
Alex Scerri: Is there anything else you’d like to share with the eVTOL community?
Bas Gouverneur: Once we achieve certification, which we’re aiming for by the end of this year for the very first ML-enabled avionics, we will practically be making history. With certification in hand, we can start engaging more OEMs to bring the product to end users.
I believe eVTOL will be a strong market for us because the system is very lightweight — around four kilograms [nine pounds] for the avionics and 4.5 kg [10 lb] total including the camera, with a power draw of just 200 watts. We’re always working to further miniaturize the system as part of ongoing development.
We’re also very active in the unmanned and non-certified military markets, which serve as excellent proving grounds. These real-world environments help us continuously refine our products, and that development in turn feeds back into our certified solutions.