API’s and AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider
API’s have become the darling of the high-tech software world. There are conferences devoted to the topic of API’s. Non-tech, business-oriented magazines and journals gush about the importance of API’s. Anyone who makes a software package nowadays is nearly forced into providing API’s.
It’s the rise of the Application Programming Interface (API).
Rather than being something magical, an API is really just a portal into a software system that otherwise might be difficult to communicate with. One major advantage of a portal is that it allows various extensions that add on to the software system and go beyond what the original software system itself can accomplish. You can also interface to the original software system and allow it to become interconnected with other software. And you can avoid having to reinvent the wheel, so to speak, by leveraging whatever capabilities the original software system already has. Some would say it also allows the software to operate at a higher level of abstraction.
I’ve written extensively about API’s for AI systems, which you can read about in Chapter 3 and Chapter 4 of my book “AI Guardian Angel Bots for Deep AI Trustworthiness: Practical Advances in Artificial Intelligence (AI) and Machine Learning” (available on Amazon at https://www.amazon.com/Guardian-Angel-Bots-Deep-Trustworthiness/dp/0692800611).
By providing a healthy set of API’s, the developers of the original software system can encourage the emergence of a third-party add-on ecosystem. This in turn helps make the original software system more popular as others connect to it and rely upon it. Eventually, with some luck and skill, the original software system becomes immersed in so many other areas of life that it becomes undeniably necessary. What might have begun as a small effort can snowball into a widespread and widely known cornerstone for an entire marketplace radiating outward from the original software core.
With great promise often comes great peril. In the case of API’s, there is a chance that their use can boomerang on the company that made the original software system. These portals can be used as intended and yet cause undesirable results, or they can be used for unintended nefarious purposes and likewise cause undesirable results.
Let’s consider the case of an API that was used as intended but caused what some perceive as an undesirable result. This particular example, involving Gmail, has been in the news recently and led to some untoward attention and concerns.
Google allows API’s to connect to Gmail. This is handy since it lets other software developers connect their software with Gmail, providing new capabilities for Gmail that otherwise would never have existed. Meanwhile, software developers that might have written something that would never have seen the light of day can piggyback onto the popularity of Gmail and hit a home run.
When an app that connects to Gmail via the API is first run, it usually asks the user whether they are OK with the app connecting into their Gmail. Many users don’t read the fine print on these kinds of messages and are so eager to get the app that they just say yes to anything it displays during installation. Or the user might be tempted to read the conditions, but they are so lengthy and written in such arcane legalese that the user doesn’t bother, and often wonders whether they have perhaps given up their firstborn child by agreeing to the app’s conditions. It’s a combination of the app at times being tricky about explaining what’s up, and the end-user not diligently making sure that they know what they’re signing up for.
Typically, once the user agrees to the app request at first install, Google then grants the app access to the Gmail of that user. This includes being able to access their emails. The app can potentially read the contents of those emails. It can potentially delete emails. It can potentially send emails on behalf of the user.
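This kind of grant is typically scoped: the app can only do what the user agreed to at install time. Here is a minimal, purely illustrative sketch of scope-based gating; the scope names and functions are assumptions for illustration, not Gmail’s actual API.

```python
# Illustrative sketch of scope-gated access, similar in spirit to how an
# email provider grants an app limited permissions. All names are hypothetical.

GRANTED_SCOPES = {"mail.read", "mail.send"}  # what the user agreed to at install

def require_scope(scope):
    """Raise if the connected app was not granted the given scope."""
    if scope not in GRANTED_SCOPES:
        raise PermissionError(f"app lacks scope: {scope}")

def read_messages(inbox):
    require_scope("mail.read")
    return list(inbox)

def delete_message(inbox, index):
    require_scope("mail.delete")  # never granted above, so this is refused
    del inbox[index]

inbox = ["hello", "invoice", "newsletter"]
print(read_messages(inbox))   # allowed
try:
    delete_message(inbox, 0)
except PermissionError as e:
    print(e)                  # blocked: app lacks scope: mail.delete
```

The point of the sketch: once the user says yes to a broad scope like reading mail, everything behind that scope is open to the app, which is exactly the surprise many Gmail users experienced.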
In recent widespread news reports, the media caused a stir by finding that some companies read users’ Gmail emails via AI, doing so to try to figure out what interests a person has and possibly then hit them with ads. In some cases, the emails are even read by humans at the software company, presumably to gauge how well the AI is doing at reading the emails. There are also some firms that provide the emails, or snapshots of the emails, to other third parties that they have deals with. All in all, it was a bit of a shock to many people that they had granted such access to their “private” email.
I realize that many software developers would blame the user for this – how dumb can you be to agree to have your emails accessed and then later complain that it is taking place? As I mentioned earlier, many users aren’t aware they are doing so, or might be vaguely aware but not really put two and two together and fully understand the implications of what they have allowed to happen. Some software developers insist their app is doing the user a service: by reading their emails, it helps target them with things they are interested in. That’s a bit of a stretch, and for many users the logic doesn’t ring true.
You might remember the case of McDonald’s in India and the API that allowed personal information from the McDelivery mobile app to be leaked. The API connection, normally intended for useful and proper purposes, also allowed access to names, phone numbers, home addresses, email addresses, and other private info. This was unintended and an undesirable result.
Hackers Love API’s
As you might guess, hackers love it when there are API’s. It gives them hope that there might be a means to sneakily “break into” a system. I’ve likened this to a fortress that has all sorts of fortified locked doors, yet also provides a window that someone with a bit of extra effort can use to get into the fort. Software companies often spend a tremendous amount of effort trying to make their software impervious to security breaches and attacks, and yet then provide an API that exposes aspects that undermine all the rest of their security.
How could that happen? Wouldn’t the API’s get as much scrutiny as the rest of the system in terms of becoming secure? The answer is that no, the API’s often don’t get as much scrutiny. The perception of the company making the software is that the API’s are some kind of techie detail and there’s no need to make sure those are tight. In my experience, most software firms happily provide the API’s in hopes that someone will want to use them, and aren’t nearly as concerned that those that might use them would do so for nefarious reasons.
The API’s are often classified into these three groupings:
API’s that are considered private are usually intended to be used solely by the firm making the software. They set up the API’s for their own convenience. This also, though, often means that the API’s have a lot of power and can access all sorts of aspects of the software. The firm figures that’s okay since only the firm itself will presumably be using the API’s. These are often either undocumented and just known amongst those that developed the software, or there is written documentation but it is kept inside the firm and written for insiders.
API’s that are oriented toward partners are intended to be used by allied firms with which the firm making the software decides to cut some kind of deal. Maybe I make a software package that does sales and marketing kinds of functions, while a firm I cut a deal with has a software package for accounting and wants to connect with my package. Once again, the assumption is that only authorized developers at properly engaged firms will use these API’s. The power of the access granted by these API’s is again relatively high, but usually less than the private API’s, since the original developers often don’t want the third party to mess up and do great harm. The documentation is often a bit more elaborate than for the private API’s, since the partner firm and its developers need to know what the API’s do.
API’s of a public nature are intended to be used by anyone that wants to access the software. These are often very limited in their access capabilities and are treated as potential threats to the system. Thus, only the need-to-know aspects are usually made available. The documentation can sometimes be very elaborate and extensive, while in other cases it is slim and the assumption is that people will figure things out on their own, or share amongst each other as they figure out what the API’s do.
What sometimes happens is that a firm provides say public API’s, and secretly has partner API’s and private API’s. Those developers that opt to use the public API’s become curious about the partner API’s, and either figure them out on their own, or convince a partner to leak details about what they are. If the partner API’s can be used, the next step is to go after the private API’s. It can become a stepwise progression to figuring out the whole set of API’s.
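The three tiers just described can be sketched as a simple capability map, where each tier grants a progressively narrower set of capabilities. This is a minimal illustration; the tier names and capability sets are assumptions, not any vendor’s real scheme.

```python
# Illustrative capability map for the private/partner/public API tiers.
# Capability names are hypothetical.

API_TIERS = {
    "private": {"read_all", "write_all", "admin"},  # full power, in-house only
    "partner": {"read_all", "write_limited"},       # broad but less dangerous
    "public":  {"read_limited"},                    # need-to-know only
}

def can_call(tier, capability):
    """Return True if a caller at the given tier may use the capability."""
    return capability in API_TIERS.get(tier, set())

print(can_call("public", "read_limited"))   # True
print(can_call("public", "admin"))          # False
print(can_call("private", "admin"))         # True
```

The stepwise-progression risk described above amounts to an attacker working their way up this map: starting at "public" and probing for the "partner" and "private" capabilities.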
API’s are also often classified by whether they:
- Perform an action
- Provide object access
Let’s first consider the action-performing type of API. This allows an app invoking the API to request that the original software perform an action that has been made available via the API. For example, suppose there’s a car that has an electronic on-board system and there’s an API associated with that system. You develop a mobile app that connects to the on-board electronic system and you opt to use the API to invoke an action the electronic system is capable of performing. Suppose the action consists of honking the horn. Your mobile app connects to the on-board electronic system and via the API requests that the electronic system honk the horn, which it then dutifully does. Honk, honk.
Or, an app might seek to get access to an object and do so via the API. Suppose the electronic on-board system of the car has data in it that includes the name of the car owner and vehicle info such as the make, model, and number of miles driven. The developers of the electronic on-board system might make available an API that allows for access to the “car owner object” that has that data. You then create an app that connects to the electronic on-board car system and asks via the API to access the car owner object. Once the object is provided, your app then reads the data and now can display it on the screen of the mobile app.
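The two styles can be sketched side by side. The on-board system class, its fields, and its method names below are hypothetical stand-ins, purely to show the distinction between an action-performing API and an object-access API.

```python
# Minimal sketch of the two API styles: one endpoint performs an action
# (honk the horn), the other returns an object (owner/vehicle data).
# The on-board system and its fields are hypothetical.

class OnboardSystem:
    def __init__(self):
        self.horn_honks = 0
        self._owner = {"name": "A. Driver", "make": "Acme",
                       "model": "Roadster", "miles": 12000}

    # Action-performing API: tells the system to do something.
    def honk_horn(self):
        self.horn_honks += 1
        return "Honk, honk."

    # Object-access API: hands back a copy of the data, not control.
    def get_owner_object(self):
        return dict(self._owner)

car = OnboardSystem()
print(car.honk_horn())                 # Honk, honk.
owner = car.get_owner_object()
print(owner["model"], owner["miles"])  # Roadster 12000
```

Note that the object-access call returns a copy rather than the live record, a common design choice so that a third-party app can read the data but cannot quietly mutate the system’s internal state.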
How does this apply to AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. This includes providing API’s, and also involves making use of API’s provided by other allied software systems and components.
If you’ve ever played with the API for the Tesla, you likely know that you can get access to vehicle information, vehicle settings, and the like. You can also invoke actions such as honking the horn, waking up the car, starting the charging of the car, setting the car climate controls, opening the trunk, and so on. It’s fun and exciting to create your own mobile app to do these things. That being said, there is already a mobile app provided by Tesla that does these things, so it really doesn’t pay off to create them yourself, other than for the personal satisfaction involved, to explore the nature of API’s on cars, or if you are trying to develop your own third-party app and want to avoid or circumvent the official one.
One of the crucial aspects of API’s for cars is that a car is a life-or-death matter. It’s one thing to provide API’s to an on-board entertainment center, allowing you to write an app that can connect to it and play your favorite songs. Not much of a life-or-death matter there. On the other hand, if the car provides API’s that allow for actual car control aspects, it could be something much more dangerous and of concern.
Now that I’ve dragged you through the fundamentals of API’s, it gets us to some important points:
- What kind of API’s, if any, should an AI self-driving car provide?
- If the API’s are provided for an AI self-driving car, how will they be protected from misuse?
- If the API’s are provided for an AI self-driving car, how will they be tested to ensure their veracity?
Some auto makers and tech firms are indicating they will not provide any API’s regarding their AI self-driving cars. That’s their prerogative and we’ll have to see if that’s a good strategy.
Some are making private API’s and trying to be secretive about it. The question always arises: how can you keep it secret, and what happens if the secret gets discovered?
Some are making partner API’s and letting their various business partners know about it. This can be handy, though as mentioned earlier it might start other third-parties down the path of figuring out the partner API’s and then next aiming at the private API’s.
Overall, it’s a mixed bag as to how the various AI self-driving car firms are opting to deal with API’s.
There’s also another twist to the API topic for AI self-driving cars, namely:
- API’s for Self-Driving Car On-Board System
o API for the AI portion of self-driving car on-board system
o API for non-AI portions of the self-driving on-board system
- API’s for Self-Driving Car Cloud-Based System
o API for AI portion of self-driving car cloud-based system
o API for non-AI portions of the self-driving car cloud-based system
There can be API’s for the on-board systems of the self-driving car, and there can be other API’s for the cloud-based system of the self-driving car. Most AI self-driving cars are going to have OTA (Over The Air) capabilities to interact with a cloud-based system established by the auto maker or tech firm. From a third-party perspective, it would be handy to be able to communicate with the software that’s in the cloud over OTA, in addition to the software that’s on-board the self-driving car.
See my article about the OTA in AI self-driving cars: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/
Some AI developers think it is crazy talk to allow API’s for the self-driving car on-board systems. They believe that the on-board systems are sacrosanct and that nobody but nobody should be poking around in them. Likewise, there are AI developers that believe fervently that there should not be API’s allowed for the cloud-based systems associated with self-driving cars. They perceive that this could lead to incredible troubles, since it might somehow allow someone to do something untoward that could then get spread to all of the self-driving cars that connect to the cloud-based system.
See my article about kits for AI self-driving cars: https://aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/
See my article about security and AI self-driving cars: https://aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/
There are some auto makers and tech firms that want to provide API’s, doing so in hopes that their AI self-driving car will become more popular than the competition. As mentioned earlier, if you can get a thriving third-party ecosystem going, it can greatly boost your core system and get it more enmeshed in the marketplace. Also, if you have only one hundred developers in your company, they can only do so much, but if you can have thousands upon thousands of “developers” writing more software to connect to your system, you have greatly magnified your programming reach.
Innocent API’s Promise Not to Endanger
It is believed by some that API’s can be provided for aspects that don’t endanger the self-driving car and its occupants; these are the so-called innocent API’s.
Suppose, for example, that the API’s only allow for retrieval of information from the AI and the self-driving car. This presumably would prevent someone from getting the AI self-driving car to perform an undue action. Just make available API’s for object access, but none that allow for performing an action. You can still criticize this and suggest there might be a loss of information privacy due to the object-access API’s, but at least it isn’t going to directly commandeer the AI self-driving car.
See my article on privacy of AI self-driving cars: https://aitrends.com/selfdrivingcars/privacy-ai-self-driving-cars/
Another viewpoint is that it is okay to allow action-performing API’s, but those API’s would be constrained to only narrow and presumably safe actions. Suppose you have an API that allows for honking the horn or flashing the lights of the car? Those seem innocuous. That being said, I suppose if you honk the horn at the wrong time it can confuse pedestrians and maybe scare people. Similarly, flashing the lights of the car at the wrong time might alarm the human driver of a human-driven car. Generally, though, those don’t seem overly unsafe per se.
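One simple way to realize this constrained-actions idea is an allowlist: only the presumably safe actions are exposed, and everything else is rejected outright. This is a minimal sketch; the action names are illustrative assumptions.

```python
# Sketch of the "constrained actions" idea: an allowlist admits only
# presumably innocuous commands and refuses everything else.
# Action names are hypothetical.

SAFE_ACTIONS = {"honk_horn", "flash_lights"}

def invoke_action(name):
    """Perform the named action only if it is on the allowlist."""
    if name not in SAFE_ACTIONS:
        raise PermissionError(f"action not exposed via API: {name}")
    return f"performed {name}"

print(invoke_action("flash_lights"))  # performed flash_lights
try:
    invoke_action("change_steering")
except PermissionError as e:
    print(e)                          # action not exposed via API: change_steering
```

The allowlist (rather than a blocklist) is the safer design here: anything not explicitly deemed safe is refused by default, so a newly added internal action is not accidentally exposed.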
There are five core stages of an AI self-driving car while in action:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls commands issuance
See my article about my framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
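The five stages above can be sketched as a simple pipeline, which also makes clear where an API might tap in at each stage. Every function below is a toy stand-in for the real processing, with made-up values purely for illustration.

```python
# Toy pipeline of the five core stages of an AI self-driving car in action.
# Each stage is a hypothetical stand-in, not real sensor or control logic.

def collect_sensors(raw):        # 1. sensor data collection and interpretation
    return {"radar": raw * 2, "lidar": raw * 3}

def fuse_sensors(readings):      # 2. sensor fusion
    return readings["radar"] + readings["lidar"]

def update_world_model(fused):   # 3. virtual world model updating
    return {"obstacle_distance": fused}

def plan_action(world):          # 4. AI action planning
    return "brake" if world["obstacle_distance"] < 10 else "cruise"

def issue_commands(action):      # 5. car controls commands issuance
    return f"command: {action}"

def drive_cycle(raw_reading):
    world = update_world_model(fuse_sensors(collect_sensors(raw_reading)))
    return issue_commands(plan_action(world))

print(drive_cycle(1))   # command: brake   (fused value 5 is under the threshold)
print(drive_cycle(4))   # command: cruise  (fused value 20)
```

An API exposing stage 1 would only read the output of collect_sensors, while an API reaching into stages 4 or 5 could alter what command ultimately gets issued, which is why the later stages raise far graver concerns.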
If there were an API for the retrieval of information from the sensor data collection and interpretation stage, this would seem innocuous. Indeed, it might allow a clever third party to develop add-ons that could do some impressive augmentation of the sensor analysis. You could potentially also grab the data and push it through other machine learning models to try to find better ways to interpret the data. As mentioned before, though, this could have privacy and other complications.
For the sensor fusion, suppose you provided an API that would allow invoking some subroutines that combine the radar data and the LIDAR data. This raises all sorts of potential issues. Will this undermine the validity of the system? Will this consume on-board computer resources and possibly starve other mission-critical elements? And so on.
The same concerns can be raised about API’s that might invoke actions of the virtual world model, or actions involving the AI action plan updating. The same is the case for toying with the car controls commands issuance. Indeed, any kind of taxing of those components, even if only for data retrieval, would have to be done in such a manner that it does not simultaneously slow down or distract those aspects while they are working.
We must also consider that there can be a difference between what an API was intended to do, and what it actually does. If the auto maker or tech firm was not careful, they could have provided an API that is only supposed to honk the horn, but that if used in some other manner it can suddenly (let’s pretend) change the steering direction of the self-driving car. This shouldn’t happen, of course, and could produce deadly consequences. It wasn’t intended to happen. But inadvertently, while creating the API, the developers made a hole that allowed for this to occur. Some determined hackers might discover that the API has this other purpose.
Now, I am sure that some of you will say that even if there is something untoward in an API capability, all the auto maker or tech firm needs to do is send out an update via the OTA and close off that back-door. Yes, kind of. First, the auto maker or tech firm has to find out that the back-door even exists. Then they need to create the plug or fix, and test it to make sure it doesn’t produce some other untoward result. They then need to push it out to the self-driving cars via the OTA. The self-driving cars have to have their OTA enabled, download the plug or fix, and install it. All of this takes time, and meanwhile the self-driving cars are “exposed” to someone taking nefarious advantage of the hole.
The API’s are often set up with authentication that requires any connecting system to have proper authority to access the API. This is a handy and important security feature. That being said, it is not necessarily an impenetrable barrier. Remember the story of the app that gains access to your Gmail when you first install it by getting your permission to do so. Suppose you are installing an app on your smartphone, which you’ve already connected to your AI self-driving car, and the app asks you to allow it to access the API’s in your self-driving car. You indicate yes, not knowing what ramifications this could have.
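Authentication along these lines often boils down to the API honoring only requests that carry a credential derived from a shared secret. Here is a minimal sketch using only the Python standard library; the secret handling and names are simplified assumptions, not a production scheme.

```python
# Sketch of API authentication: the car-side API honors only requests
# carrying a token signed with a shared secret. Key handling is
# deliberately simplified for illustration.

import hashlib
import hmac

SECRET = b"shared-secret-key"  # provisioned to authorized apps (assumption)

def sign(app_id: str) -> str:
    """Produce the token an authorized app would present."""
    return hmac.new(SECRET, app_id.encode(), hashlib.sha256).hexdigest()

def call_api(app_id: str, token: str) -> str:
    """Admit the caller only if its token verifies against the secret."""
    if not hmac.compare_digest(token, sign(app_id)):
        raise PermissionError("authentication failed")
    return f"API access granted to {app_id}"

print(call_api("trusted-app", sign("trusted-app")))  # API access granted to trusted-app
try:
    call_api("rogue-app", "forged-token")
except PermissionError as e:
    print(e)                                         # authentication failed
```

The catch, as the Gmail story shows, is that this machinery only verifies that a credential was issued; if the user hands a legitimately issued credential to a dubious app, the authentication check passes all the same.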
Will AI self-driving car makers provide API’s? Will they provide SDK’s (Software Development Kits)? Will they discourage or encourage so-called “hot wiring” of AI self-driving cars? Perhaps the path will be to limit any such capabilities to only on-board entertainment systems and not at all to any kind of car control or driving task elements.
Without such API’s, the AI self-driving car might presumably be safer, but it might also lose out on the possible bonanza of all sorts of third-party add-ons that would make your AI self-driving car superior to others and help it become the de facto standard AI self-driving car that everyone wants. We’ll have to wait and see how the API wars play out.
Copyright 2018 Dr. Lance Eliot
This content is originally posted on AI Trends.