Declaration on the ethics of brain–computer interfaces and augmented intelligence

Progress and risks

Brain–computer interfaces (BCIs) constitute a transdisciplinary field spanning, but not limited to, brain science and artificial intelligence. BCIs can be divided into invasive, semi-invasive, and non-invasive types. Invasive (e.g., micro-electrode) and semi-invasive (e.g., ECoG) BCIs are aimed mainly at the medical field, to address brain diseases and cognitive dysfunction in brain-injury patients. Non-invasive BCIs (e.g., EEG, MEG, PET, fMRI, and fNIRS) currently target primarily the general consumer market, to augment and expand human cognitive function.

BCIs are a disruptive technology that repairs, augments, expands, and extends human intelligence, and they are one of the important means of achieving augmented intelligence. The purpose of augmented intelligence is to augment human intelligence and cognitive ability as an assistive technology, not to replace them. One of the original intentions of BCI and augmented intelligence technology was to help patients with motor nerve dysfunction [1]. Today's BCI technology is not only used to treat injuries and diseases [2]; it is also used for game control [3], to help disabled people control wheelchairs [4], to support and improve learning [5], and in the military field [6].

BCI technology has a bright future in general, especially in medical treatment and in expanding human cognition. Yet its development is still in its infancy, and many potential applications based on current BCIs carry risks that may prove uncontrollable. For example, deep-brain stimulation (DBS) is a BCI technology used to treat Parkinson's disease, severe obsessive–compulsive disorder, and severe depression, yet the potential risks of current DBS remain extensive, including wound infection, paresthesia, seizures, intracerebral hemorrhage, hemiplegia, cerebral infarction, and iatrogenic harm [7, 8].

Although much recent progress seems very encouraging [9], once BCIs are mature enough for various mind-reading tasks and deployed in various scenarios, it will be nearly impossible for individuals to keep their thoughts private, posing great challenges to personal privacy and human agency. In addition, how should we interact with and treat people who use BCIs to extend their memory, learning, or physical motor skills? Is it fair that empowered people will benefit more than those who do not use the technology? These ethical issues have attracted much attention, and they are just the tip of the iceberg.

The declaration

We advocate the development of human-oriented, sustainable BCIs and augmented intelligence to promote human flourishing. Based on existing ethical considerations of BCIs [2, 10,11,12,13], we issue the following declaration:

  • Privacy protection When conducting scientific experiments and providing technical services involving BCIs and augmented intelligence, attention should be paid to the boundaries of brain-data collection and analysis. If user-related disease information, potential health information, or other privacy-related information (such as information that users do not want to share but that is obtained through BCIs) is acquired, it should be processed responsibly. Informed consent should be obtained before collecting and using user-related private information, and appropriate mechanisms for users to revoke authorization should be provided.

  • Identity and responsibility recognition BCIs may affect people's perception of the self and of personal responsibility, including moral and legal responsibility (for example, when people use BCI-related equipment, insufficient training, lapses in concentration [14], hacker intrusion, or equipment failure may cause harm to the external environment and to other humans; such harms may not reflect the users' subjective wishes). Therefore, when applying BCI technologies, especially invasive BCIs, to the human body, it is necessary to pay close attention to changes in users' sense of self, identity, and responsibility, and to prevent negative impacts on human identity and responsibility recognition.

  • Autonomy of decision making BCIs and augmented intelligence devices should not be used to replace or weaken human decision-making ability unless it has been fully demonstrated that the associated risks can be kept below the human level. The autonomy of human decision making and judgment should be respected and maintained.

  • Safety and security BCIs can cause infection, headaches, and other injuries to humans due to device implantation or interfacing with devices [13]. They can also be exploited through technical loopholes or design defects in their equipment, and are prone to failure. Therefore, key techniques should be made open and transparent as necessary to reduce potential risks. The stability, safety, security, adaptability, and reliability of BCI devices need to be continuously improved to avoid design flaws that may cause negative side effects for other human beings and the environment. Reasonable safety and security mechanisms should be progressively designed and implemented to prevent the execution of possible negative intentions.

  • Informed consent Providers of BCI and augmented intelligence products and services need to clearly inform users of the potential risks of related products and services and explicitly obtain the consent of users (or their authorized representatives). Users (or authorized representatives) have the right to suspend use of related products and services, and related service providers (including medical providers) should follow the users' wishes and make the appropriate adjustments.

  • Accountability The design, development, use, and deployment phases of BCIs and augmented intelligence should be accountable. Key technologies should be open as necessary, and the relevant parts of the systems should have the necessary levels of transparency, explainability, and predictability. In addition, the traceability of faults and risks should be ensured.

  • Fairness BCIs and augmented intelligence have the potential to give users stronger cognitive abilities, and thus a clear advantage in competition with people who lack the financial means to use these technologies. Developed regions and high-income people are more likely than people in less developed regions to obtain BCI and augmented intelligence technologies and thereby reinforce their social advantages, which may widen the gap between rich and poor and lead to unfairness in social activities such as employment and education. Therefore, it is necessary to attend to potential fairness controversies when seeking benefits from enhancing existing human intelligence. Attention must be paid to avoiding the unfairness that introducing BCIs and augmented intelligence could bring to education, work, resource allocation, and many other areas.

  • Avoiding bias The thoughts and behaviors of those who use BCIs and augmented intelligence to repair and augment human intelligence may differ from those of people who have not applied these technologies. However, no bias against people who use BCIs and augmented intelligence should be tolerated. Relevant users should be fully respected, their dignity should not be compromised, and all their due rights should be ensured.

  • Moderate use Many aspects of BCIs and augmented intelligence are still in the very early stages of development; in particular, the maturity of related equipment and algorithms still needs to improve. In addition, their long-term impact on humans and society remains unclear. Therefore, the use of BCI and augmented intelligence products and services should follow the principle of moderate use: they should be used only after careful evaluation, and only when necessary, so that negative impacts on humans are minimized.

  • Avoiding misuse One should avoid applying related products and services without an adequate understanding of the potential negative effects of BCI and augmented intelligence products and services. One should also avoid improper application without understanding the intended scope of application of related products and services.

  • Prohibition of abuse and malicious use It is prohibited to abuse BCI and augmented intelligence products and services in ways that violate human dignity and fundamental human rights. It is prohibited to abuse related technologies to undermine social stability, trust, justice, unity, and peace. It is prohibited to maliciously apply related technologies and services, or to exploit their loopholes, to engage in illegal activities or seek improper gains. Users should not use BCIs and augmented intelligence to evade their own responsibilities.

  • Multi-stakeholder governance The ethical issues of BCIs require profound discussion, debate, and long-term attention from scholars in brain and neuroscience, medical science, artificial intelligence, materials science, electrical engineering, philosophy, ethics, sociology, and many other fields. Research institutions, industry, governments, and the general public all need to be involved. Countries and intergovernmental organizations should gradually establish BCI and augmented intelligence governance frameworks and mechanisms in a democratic manner, and conduct practice and evaluation, so as to continuously support the grounding and implementation of relevant ethical principles.

Discussion

The formulation of ethical declarations and principles is only the starting point for the responsible development of BCIs and augmented intelligence. What is more essential is to implement the declarations and principles from technical and social perspectives, and to establish an effective evaluation mechanism to ensure that they are implemented as expected [15].

BCI and augmented intelligence systems are unlike traditional tools such as a knife, which itself has very limited safety mechanisms and risk precautions (for example, a knife cannot identify potential harm to humans and other living beings, or help avoid it). The providers of BCI and augmented intelligence systems should therefore be more accountable and take on more responsibility. Well-designed BCI and augmented intelligence systems can include monitoring components that help detect and avoid specific types of potential harm to others. Hence, these systems should be progressively designed and implemented with more safety and risk-precaution mechanisms, so that if users intend to take actions that would harm others (or even themselves), the execution of such negative intentions can be prevented.

In the absence of ethical considerations, the development and use of BCIs and augmented intelligence will greatly reduce the public's trust in and acceptance of innovative technologies [16], and carry potential risks of irreversible negative impacts on human society. Embedding ethical considerations throughout the full life cycle of BCI and augmented intelligence products and services, and continuously developing and benefiting from multi-stakeholder governance, can ensure the sustainable development of BCIs and augmented intelligence, and ultimately contribute to human flourishing and the sustainable development of human society.

References

  1. Lebedev, M.: Brain-machine interfaces: an overview. Transl. Neurosci. 5(1), 99–110 (2014). https://doi.org/10.2478/s13380-014-0212-z

  2. Wolpe, P.R.: Ethical and social challenges of brain-computer interfaces. AMA J. Ethics 9(2), 128–131 (2007). https://doi.org/10.1001/virtualmentor.2007.9.2.msoc1-0702

  3. al Conocimiento, V.: Video games controlled by thoughts. OpenMind (2020). https://www.bbvaopenmind.com/en/technology/innovation/video-games-controlled-by-thoughts/. Accessed 17 Nov 2020

  4. Tang, J., Liu, Y., Hu, D., Zhou, Z.: Towards BCI-actuated smart wheelchair system. Biomed. Eng. Online (2018). https://doi.org/10.1186/s12938-018-0545-x

  5. Spüler, M., Krumpe, T., Walter, C., Scharinger, C., Rosenstiel, W., Gerjets, P.: Brain-computer interfaces for educational applications. In: Buder, J., Hesse, F.W. (eds.) Informational environments: effects of use, effective designs, pp. 177–201. Springer International Publishing, Cham (2017)

  6. The US military is trying to read minds. MIT Technology Review (2020). https://www.technologyreview.com/2019/10/16/132269/us-military-super-soldiers-control-drones-brain-computer-interfaces/. Accessed 17 Nov 2020

  7. Fenoy, A.J., Simpson, R.K.: Risks of common complications in deep brain stimulation surgery: management and avoidance: clinical article. J. Neurosurg. 120(1), 132–139 (2014). https://doi.org/10.3171/2013.10.JNS131225

  8. Gilbert, F., Lancelot, M.: Incoming ethical issues for deep brain stimulation: when long-term treatment leads to a ‘new form of the disease.’ J. Med. Ethics (2020). https://doi.org/10.1136/medethics-2019-106052

  9. Elon Musk’s Neuralink is neuroscience theater. MIT Technology Review (2020). https://www.technologyreview.com/2020/08/30/1007786/elon-musks-neuralink-demo-update-neuroscience-theater/. Accessed 1 Dec 2020

  10. Yuste, R., et al.: Four ethical priorities for neurotechnologies and AI. Nat. News 551(7679), 159 (2017)

  11. Coin, A., Mulder, M., Dubljević, V.: Ethical aspects of BCI technology: what is the state of the art? Philosophies (2020). https://doi.org/10.3390/philosophies5040031

  12. Coin, A., Dubljević, V.: The authenticity of machine-augmented human intelligence: therapy, enhancement, and the extended mind. Neuroethics (2020). https://doi.org/10.1007/s12152-020-09453-5

  13. Drew, L.: The ethics of brain–computer interfaces. Nature (2019). https://doi.org/10.1038/d41586-019-02214-2

  14. Kögel, J., Jox, R.J., Friedrich, O.: What is it like to use a BCI?—insights from an interview study with brain-computer interface users. BMC Med. Ethics (2020). https://doi.org/10.1186/s12910-019-0442-2

  15. O’Neill, O.: From principles to practice: normativity and judgement in ethics and politics. Cambridge University Press, Cambridge (2018)

  16. O’Neill, O.: Autonomy and trust in bioethics. Cambridge University Press, Cambridge (2002)


Funding

This study is financially supported by the Ministry of Science and Technology of China (Grant Nos. 2020AAA0105304 and 2020AAA0104305) and the Beijing Municipality of Science and Technology.

Author information

Corresponding author

Correspondence to Yi Zeng.

Ethics declarations

Conflict of interest

On behalf of all the authors, the corresponding author states that there is no conflict of interest.


About this article

Cite this article

Zeng, Y., Sun, K. & Lu, E. Declaration on the ethics of brain–computer interfaces and augment intelligence. AI Ethics (2021). https://doi.org/10.1007/s43681-020-00036-x
