By Prabhu Kottapu, Director of Data Science & Analytics, Springfield Clinic
Artificial Intelligence (AI) now plays a significant role in the applications we use every day and the websites we visit. YouTube, Facebook, and Google all use some form of artificial intelligence. AI offers great benefits for improving the technology we use, but at the same time it can produce unintended uses or results that lead to problems, including ethical issues.
Before going into the bad, it’s worth understanding why AI is used in the first place. If AI can cause problems, why use it at all? Artificial intelligence is almost a necessity for today’s technological advances: without some form of it, it is far harder to improve how applications and platforms run and return results. At its core, AI is a set of learning algorithms that can help us understand current information and predict the information users will search for or need next.
Let’s consider the example of Google. Google uses an algorithm to show ads based on previous search activity. This can draw not only on company websites that users have visited but also on items that were merely searched for. Suppose you type “scrubs” into the Google search bar. An array of companies, online stores, and pictures will be displayed. It probably won’t end there, because the next time you return to Google, ads for scrubs may appear even if you search for something completely different. The same could be said of YouTube, where ads for scrubs may pop up during a video.
Given these examples, AI integration looks very positive for the many businesses that thrive on online searching and shopping. However, not everything is perfect, and unfortunately, unintended and even harmful consequences can come out of AI usage.
An example of a largely negative ramification of errors in an AI system happened in Arkansas. Buggy software altered health benefits for hundreds of people, including many with severe disabilities. One of them was a woman named Tammy Dobbs, who has cerebral palsy and needed an aide for daily tasks around the home, but who suddenly lost several hours of aide time per week because of the buggy system. Blame was passed back and forth between the software’s developer and the government, with the developer pointing more explicitly at government policies. This example shows the kinds of issues that can arise when an AI implementation is buggy, and how the use of AI can make it easier for no one to feel responsible.
That is just one example of how poorly written or poorly monitored AI can go wrong. The example above negatively affected a specific group of people; AI errors will not always be harmful in that way, but they can still lead to ethical issues. What causes these kinds of ethical issues in AI? One piece of it is how the algorithm was written. If the team that wrote the algorithm is not diverse, its members will most likely all miss the same problems in the algorithm, problems that might be easily spotted by someone with a different ethnicity or background.
An example of this could be adding a block button to a new social networking application. If none of the staff creating the application ever had adverse experiences on previous social networking apps that called for this option, they may not even think of it. By contrast, one or a few people on the team who have had that problem will recognize it as an excellent option for those who want to use the social network safely and securely.
It is hard to cover the whole topic of ethics in AI here, as the subject is simply too vast, but hopefully some of these points offer food for thought as we dive into healthcare.
Healthcare encompasses a wide range of businesses, from hospitals to private practices to specialized doctors’ offices. AI is usually a behind-the-scenes technology, so how can it help places like these work more with the public? The answer is quite simple. Healthcare organizations typically run computer systems that store patient information, from basic demographics to diagnoses and treatment decisions. AI can be used to keep track of this information. It can also be used to better estimate the supplies and equipment needed based on the number of cases of various illnesses, such as flu cases or patients with diabetes, from month to month or even year to year.
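As a rough illustration of that last point, here is a minimal sketch of predicting next month’s supply needs from monthly case counts. The case numbers, the moving-average method, and the kits-per-case ratio are all hypothetical assumptions for illustration, not a description of any real clinic’s system.

```python
# Minimal sketch: forecast next month's supply demand from monthly case
# counts using a simple moving average. All numbers are hypothetical.

def moving_average_forecast(monthly_cases, window=3):
    """Predict next month's case count as the mean of the last `window` months."""
    recent = monthly_cases[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly flu case counts for one clinic
flu_cases = [120, 135, 150, 180, 210, 240]

predicted_cases = moving_average_forecast(flu_cases)
# Assume (for illustration) each flu case consumes roughly 2 test kits
kits_to_order = round(predicted_cases * 2)
print(f"Predicted cases next month: {predicted_cases:.0f}")
print(f"Test kits to stock: {kits_to_order}")
```

A production system would of course use real records and a proper forecasting model, but the shape of the task is the same: learn from past counts, then order ahead of demand.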
AI is a beautiful tool, but just as in the examples presented earlier, ethical issues can arise. Assuming diagnoses based on age and gender, for instance, is a common ethical problem. If an older man comes in with chest pain, it might be assumed that he is having a heart attack, when it only ends up being a solid case of heartburn from the burger he ate earlier that day.
This may seem like a silly example, but ethical considerations need to be thought through seriously, so that significant and even catastrophic implications do not harm innocent people on both narrow and wide scales, as in the Arkansas example. It is best to have a diverse team when creating AI algorithms, and to keep the project guidelines and the group anchored to three words while creating and launching the algorithm: fair, accountable, and transparent. If these ideas are maintained and a comprehensive, diverse team is put in place, the creation and monitoring of ethical AI should improve, with fewer issues during usage.
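One concrete way a team can act on “fair, accountable, and transparent” is to audit a model’s errors across patient groups. The sketch below compares false-negative rates between two hypothetical groups; the records, labels, and group definitions are invented for illustration and do not come from any real model or dataset.

```python
# Minimal fairness-audit sketch: compare a model's false-negative rate
# across patient groups. All records here are hypothetical.

def false_negative_rate(records):
    """Share of truly positive cases the model missed."""
    positives = [r for r in records if r["actual"]]
    if not positives:
        return 0.0
    missed = [r for r in positives if not r["predicted"]]
    return len(missed) / len(positives)

# Hypothetical predictions for two demographic groups
group_a = [
    {"actual": True, "predicted": True},
    {"actual": True, "predicted": True},
    {"actual": True, "predicted": False},
    {"actual": False, "predicted": False},
]
group_b = [
    {"actual": True, "predicted": False},
    {"actual": True, "predicted": False},
    {"actual": True, "predicted": True},
    {"actual": False, "predicted": False},
]

for name, group in [("Group A", group_a), ("Group B", group_b)]:
    print(f"{name}: false-negative rate = {false_negative_rate(group):.2f}")
# A large gap between groups is a signal the model may be treating one
# group unfairly and warrants review by the team.
```

A check like this is only one slice of fairness, but running it routinely, and publishing the results, is one practical expression of accountability and transparency.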