Introduction
Many facets of contemporary life, including entertainment, business, and healthcare, are shaped by big data and machine learning. Google knows which ailments and symptoms people are searching for, just as Netflix and Amazon know which movies and television shows people choose to watch. All of this information can be used to build comprehensive individual profiles, which are highly useful for behavioral understanding and targeting and for forecasting healthcare trends.1 There is much hope that using artificial intelligence (AI) will significantly advance all facets of healthcare, from diagnosis to therapy.
There are numerous examples of tasks where artificial intelligence algorithms perform on par with or better than humans, including the analysis of medical images and the correlation of signs and biomarkers from electronic medical records (EMRs) with disease diagnosis and prognosis. Demand on healthcare facilities is constantly growing, and several nations face shortages of healthcare professionals, particularly doctors. Healthcare organizations are also struggling to keep up with the latest technical advancements and with consumers' high expectations for service levels and outcomes similar to those they have come to expect from consumer brands such as Apple and Amazon.2
In addition to opening up options for on-demand healthcare services through health-tracking applications and consultation platforms, advances in wireless technology and mobile phones have made remote consultations, accessible anywhere and anytime, a new method of delivering healthcare. Such services are essential for impoverished areas and places lacking specialists, and they help lower costs and avoid unnecessary clinic exposure to infectious diseases.2 Telehealth technology is especially significant in developing nations, where the healthcare infrastructure can be built to match current demands as the healthcare system expands.
A research stream can be assessed using bibliometric techniques, which add objectivity and lessen researcher bias, as indicated by Zupic and Čater. Because of this, bibliometric methods are becoming increasingly popular among academics as a trustworthy and impartial way of conducting research analyses. Huang et al.'s bibliometric analysis examines the use of virtual reality technology in rehabilitation medicine; the authors state that the main objective of rehabilitation is to improve and restore functional ability and quality of life for people with physical impairments or disabilities. Text mining in medical research is the focus of Hao et al. Text mining uses a computer to automatically extract information from many text sources, revealing new, previously undiscovered information.3
Similarly, the studies by dos Santos et al. concentrate on using machine learning (ML) and data mining approaches to address public health issues. According to the findings of this work, public health can be summed up as the practice of disease prevention, wellness promotion, and life extension. Big data is a common "buzzword" in the business and scientific sectors, according to Liao et al., denoting a very large amount of data. The medical sector likewise holds a sizable amount of data (also known as medical big data).
A systematic review on the application of machine learning (ML) to enhance the care of elderly patients was recently published by Choudhury et al., highlighting eligible research primarily in the areas of psychological problems and visual ailments. Tran et al.'s study examines the global advancement of AI in medical research; their bibliometric analysis identifies trends and topics in AI techniques and applications.4
According to Connelly et al.'s study, the number of procedures performed with robot assistance has increased significantly in recent years. Their bibliometric research shows how robotic-assisted surgery has become more common across a variety of medical specialties, including urological, colorectal, cardiothoracic, orthopedic, maxillofacial, and neurosurgical applications. Additionally, Guo et al.'s bibliometric analysis offers a thorough examination of AI papers published up to December 2019. The report focuses on practical AI health applications and provides academics with an understanding of how algorithms might benefit medical professionals.
Technological Advancements
In the last ten years, there have been many technical advancements in the fields of data science and artificial intelligence. The recent wave of attention to AI is distinct from earlier ones, even though research on AI for various applications has been underway for decades. The rapid development of AI tools and technologies, including in healthcare, has been made possible by an ideal convergence of faster computer processing, larger data repositories, and a growing pool of AI talent. The maturity of AI technology, its uptake, and its effects on society are all expected to change dramatically as a result. In particular, deep learning (DL) has changed how users view AI tools today and is largely responsible for the current excitement around AI applications. Several businesses are leaders in this field, including Google's DeepMind and IBM Watson. These businesses have demonstrated that their AI can outperform humans at various jobs and games, such as Go and chess. Google's DeepMind and IBM Watson are now being employed for numerous healthcare-related applications.5 Although IBM Watson is being used to research enhanced cancer treatment and modeling, drug development, and diabetes management, it has not yet demonstrated therapeutic usefulness to patients.
How can AI be defined?
Artificial intelligence is the capability displayed by machines to execute a variety of activities with the aid of techniques such as sentiment analysis and natural language processing (NLP). With this technology, computers can understand the information they are given and use it to carry out various business functions. ML and deep learning, which are subfields of artificial intelligence, each play a distinct role in the training of machines.
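As one concrete illustration of the kind of ML-driven text analysis mentioned above, the following is a minimal sketch of a sentiment classifier, assuming scikit-learn is available. The example phrases and labels are illustrative placeholders, not the method of any particular system.

```python
# Minimal sketch: a toy supervised ML text classifier of the kind that
# underlies sentiment analysis. The training phrases are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: patient feedback labeled by sentiment.
texts = ["the staff were kind and helpful",
         "excellent care, I felt listened to",
         "long wait and nobody explained anything",
         "the appointment was a frustrating waste of time"]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the nurse was very helpful"]))  # -> ['positive']
```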
Current AI interest
Although AI is not new, recent years have seen remarkable technological advancements. This has been made possible partly by advances in processing power and by the massive amounts of digital data now being produced. There is considerable public and commercial interest in researching a wide range of AI applications. In its 2017 Industrial Strategy, the UK government stated its goal to position the country as a global leader in AI and data technology.6 Major IT firms, including Google, Microsoft, and IBM, are investing in the development of AI for medical research and healthcare. Additionally, the number of AI startup businesses has been rising continuously. Many of these companies are based in the UK, some founded in association with universities and medical facilities there.
Role of AI in healthcare
AI is being applied in radiology and to chronic diseases such as cancer in order to create precise and efficient technologies that will help treat people suffering from these disorders and, ideally, find cures. AI has significant advantages over traditional analytics and clinical decision-making processes. The accuracy of these systems increases as AI algorithms learn from training data. Clinicians can now gain insights into treatment variability, care practices, diagnoses, and patient outcomes that were previously unattainable.6
AI applications in healthcare
It is generally accepted that AI technologies will support and augment human labor rather than outright replace the work of doctors and other healthcare professionals. AI is poised to assist medical staff with various duties, including organizational workflow, clinical documentation, patient outreach, and specialist support such as image analysis, medical device automation, and patient monitoring.
Diverse viewpoints exist on the best uses of AI in the healthcare industry. The following sections address several of the most important uses of artificial intelligence in healthcare, including those directly related to care delivery and those elsewhere in the healthcare value chain, such as drug research and ambient assisted living (AAL).7
AI in biomedical information processing
Natural language processing has advanced considerably for biomedical uses. The field of biomedical question answering (BioQA) aims to quickly and accurately identify answers to user-formulated queries from a pool of scientific papers and databases. It is therefore reasonable to expect natural language processing approaches to surface illuminating answers. Data assimilation, comparison, and conflict resolution are three significant activities that predominate for biomedical data gathered from many sources over extended periods. These have traditionally been labor-intensive, time-consuming, and unpleasant duties carried out by people. AI can perform these jobs with results as precise as those of a specialized inspector, which increases efficiency and accuracy. Additionally, natural language processing of clinical narrative information is required to relieve people of the onerous burden of tracking chronological events while simultaneously retaining structure and reasoning.7
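To make the clinical-narrative task above concrete, here is a minimal sketch, using only the Python standard library, of extracting dated events from a free-text note and ordering them chronologically. Real systems use trained language models; the note text and the date pattern here are illustrative assumptions.

```python
# Minimal sketch (stdlib only) of one narrow NLP subtask: pull dated
# clinical events out of a free-text note and order them chronologically.
import re
from datetime import datetime

note = ("Patient reported chest pain on 2023-01-04. "
        "ECG performed on 2023-01-05. "
        "Started beta-blocker on 2023-01-07.")

# Capture "<event text> on <ISO date>" pairs.
pattern = re.compile(r"([A-Z][^.]*?)\s+on\s+(\d{4}-\d{2}-\d{2})")

events = [(datetime.strptime(date, "%Y-%m-%d"), text.strip())
          for text, date in pattern.findall(note)]

for when, what in sorted(events):          # chronological timeline
    print(when.date(), "-", what)
```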
AI in biomedical research
Recent research in this area has focused on subjects like tumor-suppressor mechanisms, extracting information about protein-protein interactions, mapping genetic associations across the human genome to help translate genomic findings into medical practice, and so forth. Additionally, with a semantic graph-based AI procedure, biomedical researchers can carry out the complex process of summarizing the literature on a particular subject of interest. AI can also assist biomedical researchers in ranking relevant material when the volume of research articles exceeds human comprehension. This enables scientists to develop and test precise scientific hypotheses, a crucial component of biomedical research.8 The computational modeling assistant (CMA), an intelligent agent, can aid biomedical investigators in creating "executable" simulation models from conceptual representations.
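The literature-ranking idea above can be sketched very simply: score candidate abstracts against a research question using TF-IDF cosine similarity. This assumes scikit-learn, and the abstracts are illustrative placeholders, not the ranking method of any cited system.

```python
# Minimal sketch of literature ranking: score abstracts against a query
# with TF-IDF cosine similarity and print them from most to least relevant.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "protein-protein interactions in tumor suppression"
abstracts = [
    "We map protein-protein interaction networks in tumor suppressor pathways.",
    "A survey of wearable sensors for continuous heart-rate monitoring.",
    "Genome-wide association study of type 2 diabetes risk variants.",
]

vec = TfidfVectorizer()
matrix = vec.fit_transform([query] + abstracts)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```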
Genetics-based solutions
A significant portion of the world's population is anticipated to receive whole-genome sequencing over the next ten years, either at birth or as adults. This genome sequencing will provide a fantastic tool for precision medicine and is expected to occupy 100–150 GB of data. Information on phenotypes and genomics is currently being combined. A health tech startup called Deep Genomics is attempting to integrate enormous genetic datasets and EMRs related to illness indicators by finding patterns in both. To create customized genetic medications, the company uses these connections to find therapeutic objectives, either existing therapeutic targets or future therapeutic possibilities. It applies AI in each drug development phase, including target identification, lead optimization, toxicity evaluation, and innovative experimental design.
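The pattern-finding step described above can be illustrated at its simplest as a statistical test of whether a genetic variant co-occurs with a disease indicator drawn from EMRs. This sketch assumes SciPy, and the 2x2 counts are synthetic placeholders, not real data or the approach of any named company.

```python
# Minimal sketch: test association between a genetic variant and a
# disease indicator with a chi-square test on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# Rows: variant carrier / non-carrier; columns: affected / unaffected.
table = [[30, 70],    # carriers:     30 affected, 70 unaffected
         [10, 90]]    # non-carriers: 10 affected, 90 unaffected

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")  # small p suggests association
```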
Drug discovery and development
From the discovery of molecular targets through the approval and marketing of a therapeutic product, drug research and development is a tremendously protracted, expensive, and complex process that frequently takes more than ten years. On top of that, it is challenging to consistently find pharmacological compounds that are substantially better than what is already on the market, and regulatory barriers are growing. As a result, developing new medicinal products is complex and inefficient, and they are expensive once they reach the market.8
For learning systems to read the drug molecules and related characteristics employed in in silico models, those molecules must be converted into a vector format. The data utilized typically consists of molecular descriptors (such as physicochemical properties) and molecular fingerprints (encoding molecular structure), as well as graph representations for convolutional neural networks (CNNs) and simplified molecular-input line-entry system (SMILES) strings.9
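The featurization step described above can be sketched with RDKit, a widely used cheminformatics library: a SMILES string is parsed into a molecule and turned into a fixed-length fingerprint vector that a learning system can read. This assumes RDKit is installed; aspirin's SMILES is used as the example input.

```python
# Minimal sketch: convert a SMILES string into a fingerprint vector plus
# a physicochemical descriptor, the two input types named above.
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors

smiles = "CC(=O)Oc1ccccc1C(=O)O"          # aspirin
mol = Chem.MolFromSmiles(smiles)

# Morgan (circular) fingerprint: a 2048-bit structural feature vector.
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
bits = fp.ToBitString()                    # 2048-character 0/1 string

print("bits set:", bits.count("1"), "| MolWt:", Descriptors.MolWt(mol))
```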
AI and medical visualization
It is not easy to interpret data that arrives in the form of an image or a video. To develop the capacity to recognize medical events, experts must train for many years. In addition, they must actively study new material as fresh research and information become available. However, there are too few such subject-matter specialists, and the need is constantly rising. As a result, a novel strategy is required, and AI holds promise as the instrument to close this demand gap.9
Computer vision for diagnosis and surgery
While statistical signal processing was long the essential foundation for computer vision, artificial neural networks are now used more frequently as the preferred learning approach. Here, computer vision algorithms for classifying images of lesions in the skin and other soft tissue are developed using deep learning (DL). Video data is predicted to hold 25 times as much data as high-resolution diagnostic images such as CT, and can therefore deliver greater value depending on resolution over time. Although it is still early, video analysis offers much potential for medical decision support. One significant use of AI and computer vision in surgical tools is to improve specific procedures and skills, such as suturing and knot-tying. In various surgical operations, such as bowel anastomosis in animals, Johns Hopkins University's smart tissue autonomous robot (STAR) has proven that it can outperform human specialists. Although a completely autonomous robotic physician is still a dream for the future, academics are interested in applying AI to improve several elements of surgery.10
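The lesion-classification approach above rests on convolutional neural networks. The following is a minimal sketch of such a network, assuming PyTorch; the input size, layer widths, and two-class output are illustrative choices, and a real system would train on labeled dermoscopy images.

```python
# Minimal sketch: a tiny CNN image classifier of the kind used for
# skin-lesion images. Shapes and class count are illustrative.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_classes=2):      # e.g. benign vs. malignant
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = LesionCNN()
dummy = torch.randn(1, 3, 224, 224)          # one RGB image, 224x224
print(model(dummy).shape)                    # torch.Size([1, 2])
```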
Augmented reality and virtual reality in healthcare space
Augmented and virtual reality (AR and VR) may be incorporated at every stage of a healthcare system. These methods can be used by medical students at the beginning of their studies, and by experienced surgeons as well as those training for a particular specialty. Patients, on the other hand, may both benefit from and suffer from this technology.
Education and exploration
Play is one of the most significant components of our lives, and humans are visual beings; the most important way we learned as children was through play. Future doctors can be thought of as artists who craft a treatment. To be successful in a constantly changing career, these people need to possess specific talents. Many topics are introduced to students at the beginning of medical school without the students ever experiencing them in real life.
In light of this, game-like technologies such as VR and AR may improve and enrich the educational process for future medical and health-related disciplines. Using AR, medical students could be taught intricate and complex surgical methods or learn about anatomy without having to contact or involve actual patients at an early stage, and without having to perform an autopsy on a real body. These students will undoubtedly meet actual patients in their future employment, but the objective is to start the training process early and save training costs later. Human interaction in the medical industry should be promoted, although it is not always possible or essential while a person is completing a training program.10
Personal health records have historically been doctor-focused and frequently lack patient-related features. A patient-centric personal health record should therefore be introduced to encourage self-management and enhance patient outcomes. The objective is to give patients enough flexibility to manage their conditions while freeing professionals to handle more urgent and essential duties.
Health monitoring and wearables
People have relied on doctors for knowledge about their bodies for millennia, and this practice continues to some degree today. Wearable technology, which is still very new, is changing this. Wearable health devices (WHDs) enable continuous monitoring of specific vital signs under varied circumstances. Their versatility, which allows users to measure their activity while jogging, meditating, or even submerged in water, was crucial to their early acceptance and success. The aim is to give people a sense of control over their health by enabling them to examine the data and manage their health themselves. In other words, WHDs promote personal empowerment.10
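The continuous-monitoring idea above reduces, at its simplest, to comparing each new reading against a rolling baseline and flagging large deviations. This sketch uses only the Python standard library; the heart-rate stream, window size, and alert threshold are illustrative assumptions, not values from any device.

```python
# Minimal sketch (stdlib only): flag heart-rate readings that drift far
# from a rolling baseline, as a wearable monitor might.
from collections import deque
from statistics import mean

WINDOW, THRESHOLD_BPM = 5, 25              # rolling window size, alert band

readings = [72, 74, 71, 73, 75, 76, 74, 118, 72, 73]   # sample HR stream
baseline = deque(maxlen=WINDOW)

for hr in readings:
    if len(baseline) == WINDOW and abs(hr - mean(baseline)) > THRESHOLD_BPM:
        print(f"alert: {hr} bpm deviates from baseline {mean(baseline):.0f}")
    baseline.append(hr)
```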
AI limitations
Artificial intelligence (AI) relies on digital data; limitations in the quantity and quality of data therefore limit AI's potential. Additionally, vast and complicated data sets demand substantial computational resources to analyze. The degree to which patients and physicians are comfortable exchanging personal health information electronically is a matter of debate. Humans possess qualities, such as compassion, that AI systems may not be able to emulate. Clinical practice frequently requires complex judgments and skills, such as contextual knowledge and the capacity to interpret social cues, which AI is currently unable to reproduce. Also up for contention is the idea of tacit knowledge, knowledge that cannot be explicitly taught. It has been disputed whether AI will ever be able to exhibit autonomy, on the grounds that this quality is fundamental to human nature and cannot, therefore, be possessed by a machine.
Ethical and social issues
Many of the ethical and societal concerns posed by AI overlap with concerns about data usage, automation, and dependence on technology in general, as well as with problems already raised by assistive technologies and "telehealth."
Reliability and safety
When AI is utilized in healthcare to manage equipment, deliver treatments, or make decisions, reliability and safety are crucial concerns.11 AI can make mistakes, and if such errors are hard to spot or have unintended consequences, they may have dire repercussions. In a 2015 clinical experiment, for instance, an AI app was used to forecast which patients were more likely to experience complications from pneumonia and therefore needed to be hospitalized. Because the app could not take contextual information into account, clinicians were incorrectly directed to send patients with asthma home. The effectiveness of AI-powered symptom checker applications has also been questioned: it has been found that app recommendations may be overly cautious, resulting in increased demand for unnecessary tests and treatments.12
Transparency and accountability
Determining the fundamental logic underlying AI's outputs can be difficult. Some artificial intelligence is proprietary and purposefully kept secret, while some is simply too sophisticated for a human to comprehend. Because machine learning technologies constantly modify their parameters and rules as they learn, they can be highly opaque. This makes it challenging to verify AI system outputs and to spot biases or inaccuracies in the data. Under the EU General Data Protection Regulation (GDPR), data subjects have the right not to be subject to decisions made purely on the basis of automated processing that have substantial legal or other consequences. Furthermore, it stipulates that disclosures to people about the use of their data should contain "meaningful information regarding the logic involved, the relevance and the anticipated implications of such processing for the data subject, as well as the existence of automated decision-making."11
Data bias and equity
Although AI applications may minimize human error and prejudice, they may also reflect and reinforce biases in the data on which they were trained. Concerns have been raised about AI's potential to cause discrimination in ways that may be covert or that may not align with legally protected characteristics such as gender, race, disability, and age. The House of Lords Select Committee on AI has warned that datasets used to train AI systems are frequently unrepresentative of the general population and may, as a result, produce biased results.12 The Committee also found that prejudices can be ingrained in algorithms, mirroring the beliefs and preconceptions of AI developers. AI may perform less effectively when data are harder to obtain, gather, or represent digitally. This may affect individuals with uncommon medical disorders, as well as populations of African, Asian, and other minority backgrounds.
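One concrete check implied by the concerns above is to compare a model's accuracy across demographic subgroups. The sketch below uses only the Python standard library; the records are synthetic placeholders, and real audits use held-out clinical data and richer fairness metrics than raw accuracy.

```python
# Minimal sketch of a subgroup bias audit: compare prediction accuracy
# per demographic group; a large gap signals possible bias.
from collections import defaultdict

# (subgroup, true_label, predicted_label) -- illustrative records only.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    hits[group] += truth == pred
    totals[group] += 1

for group in totals:
    print(f"{group}: accuracy {hits[group] / totals[group]:.2f}")
```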
Practically speaking, for AI systems to be successfully applied in healthcare, both patients and medical staff must be able to trust them. IBM's Watson Oncology, a tool for cancer detection, reportedly had its clinical trials suspended in several clinics because physicians outside the US did not trust its suggestions and felt the model reflected an American-specific approach to cancer therapy.13
Security and privacy for data
Many people would consider the data used by AI applications in healthcare to be private and sensitive, and such data are protected by law. However, other types of data, such as social media activity and internet search history, which are not explicitly about health status, can be used to infer information about a user's health. According to the Nuffield Council on Bioethics, initiatives employing data that raise privacy issues should go beyond legal requirements and consider the public's expectations of how their data will be used. Artificial intelligence (AI) might be used to identify cyber-attacks and safeguard hospital IT systems. However, AI systems themselves may be compromised to gain access to private information, or be flooded with fictitious or biased data in ways that are difficult to detect.13
Malicious Use of AI
While AI has the potential to be beneficial, it also has the potential to be harmful. For instance, there are concerns that AI may be used for screening or clandestine monitoring. A person's health status might be revealed without their awareness by AI technologies that analyze motor behavior (such as how they type on a keyboard) and movement patterns captured by tracking devices. AI might also be used to conduct cyber-attacks at larger scale and lower cost. Governments, academics, and engineers have been urged to consider the dual-use nature of AI and to prepare for the potential malevolent use of AI technology.13
AI in the near and distant future
AI will have a significant impact on future healthcare options. In the form of machine learning, it is the primary capability behind the development of precision medicine, universally acknowledged as a critically needed healthcare improvement. Although early attempts at making recommendations for diagnosis and therapy have been difficult, AI will eventually become proficient in that field as well. The biggest hurdle for AI in various healthcare sectors is not determining whether the technologies will be capable enough to be beneficial, but rather guaranteeing their acceptance in routine clinical practice. Regulators must approve AI systems before they can be widely adopted.13 Furthermore, it is increasingly apparent that AI systems will not substantially replace human physicians in patient care but rather support them. Human doctors may eventually gravitate towards jobs and work structures that draw on distinctly human abilities such as empathy, persuasion, and big-picture integration. Over time, perhaps the only healthcare professionals who will put their careers in danger are those who refuse to work with AI.
Conclusion
In healthcare and research, artificial intelligence (AI) technologies are being utilized or tested for various tasks, such as disease diagnosis, chronic condition management, service delivery, and drug development. The quality of the health data currently accessible, and the fact that AI does not yet possess certain human qualities such as compassion, may be barriers to its use in addressing pressing health issues. AI poses numerous ethical and societal concerns, many of which also apply to the use of data and healthcare technology in general. Ensuring that AI is produced and utilized in a way that is transparent and consistent with the public interest, while motivating and driving innovation in the industry, will be a fundamental issue for the future regulation of AI technology.