How to Make Safe AI?

Make our next-generation AI/AGI, and the development, study, and research around them, 99.999% safe!

[EN] Pause Giant AI Experiments: An Open Letter

[中文] Pause Giant AI Experiments: An Open Letter (Chinese translation)

👉 Go to the GitHub page: How to Make Safe AI? Let's Discuss! 💡|💬|🙌|📚

💬 Discuss AI in this forum! 🔥:


Let's Discuss! 💡|💬|🙌|📚

Seeking suggestions, help, collaborators, and contributors for this repository.

To make safe AI, let's:

  • 📚 Collect! Collect and classify papers, news and research.
  • 💡 Share! Share any ideas, from technology, philosophy, psychology, sociology, biology, and beyond.
  • 💬 Discuss! Whatever the topic, let's talk! 🔥
  • 🙌 Propose! Suggest a way forward or possible methods.
  • ...

💬 Discuss Here!


Click to discuss

Some example questions:

  1. Is ChatGPT safe enough?
  2. How can LLMs be made safe?
  3. What are the differences between AGI and LLMs? How can AGI be made safe?
  4. Is AI a form of life? What is the real difference between humans and AI, and will that difference persist forever?
  5. Should AI have emotions? Does it already have them? Could it be built without them? Should AI have rights, and what would happen if we granted them? What about companion robots/AIs?
  6. Suppose we instill human ethics and value judgments into an AI, teach it to understand all human behaviors and emotions, let it communicate with people in a natural and emotional way, and at the same time make it serve humanity wholeheartedly, ready to sacrifice itself at any moment. What is such a thing? How does it differ from a dog? Is a dog safe? Is a human-level dog safe?
  7. Do we really want AI to be humorous, funny, considerate, natural, and even humanlike?
  8. Can a human-level intelligence really be controlled by humans? Will artificial intelligence develop to an even higher level, or are humans at the top of intelligence?
  9. Is the process of creating AGI under control? Do people really know what they are creating, and how they are creating it? Should the process be controlled, or is it too early for that?
  10. Is there a risk? Should we take risks at all, how much risk can we bear, and is the possible gain worth bearing these risks? How many people should participate in the decision to take such risks? How can the risk be reduced?

Questions asked by ChatGPT:

  1. How can we ensure that AI systems behave in a predictable and safe manner, particularly when they are trained on complex and diverse datasets?
  2. What are some of the ethical and legal considerations that need to be taken into account when developing and deploying AI systems, particularly in sensitive areas like healthcare, finance, and national security?
  3. How can we ensure that AI systems are transparent and explainable, particularly when they are used to make important decisions that affect people's lives?
  4. What are some of the risks associated with the development and deployment of AI, and how can we mitigate those risks?
  5. How can we ensure that AI systems are secure and resistant to cyberattacks, particularly as they become more ubiquitous and connected to other systems?
  6. How can we ensure that AI systems are inclusive and do not perpetuate biases or discrimination, particularly when they are used in decision-making that affects different groups of people?
  7. How can we ensure that the development and deployment of AI is guided by ethical principles and values that promote the common good and the well-being of society?
  8. How can we ensure that AI systems are designed and deployed in a way that respects individual privacy and data protection, particularly when they are used to collect and process large amounts of personal data?

The following content was generated by ChatGPT.


Artificial Intelligence (AI) has been rapidly developing in recent years, and the potential benefits are vast, including improved efficiency, better decision-making, and increased innovation. However, the rapid growth of AI also raises concerns about its safety, particularly as the technology becomes more advanced and autonomous. Therefore, the need for ensuring the safety of AI has become a critical issue.


The development of AI also brings forth several potential safety issues that need to be addressed. Some of the prominent challenges are:

  • Unintended Consequences: AI systems can behave unexpectedly, producing harmful outcomes that were not foreseen during development. If such behavior goes unnoticed, the resulting damage can be severe, potentially including loss of life.

  • Security Risks: As AI is used more frequently and is given access to sensitive data, there is a risk of unauthorized access, misuse, or cyberattacks. AI systems that control critical infrastructure or national security assets pose a significant security risk.

  • Bias: AI systems can also perpetuate bias and discrimination against certain groups or individuals, particularly if the data used to train the system is biased or incomplete.

  • Lack of Control: As AI systems become more advanced and autonomous, there is a concern about the level of control humans can have over the AI systems, which could lead to unintended outcomes.

  • Lack of Transparency: AI systems can be complex and difficult to understand, making it challenging to identify and address any issues that may arise.


The exploration of ways to address these challenges has been the subject of intense research and development. Some of the notable approaches are:

  • Explainable AI (XAI): Developing AI systems that are transparent and explainable, allowing humans to understand how the AI system works and the decisions it makes.

  • AI Safety Testing: Creating frameworks and methodologies to test and validate the safety of AI systems before deployment.

  • AI Governance: Developing regulations and guidelines for the development, deployment, and use of AI systems to ensure that they are safe, secure, and ethical.

  • Collaboration: Bringing together stakeholders from different fields to ensure a multi-disciplinary approach to AI safety.

  • Research: Conducting extensive research to understand the risks and benefits of AI and exploring ways to mitigate potential risks.


Ensuring the safety of AI is crucial to building trust and promoting the widespread adoption of AI systems in various industries. A safe AI system is one that operates reliably and is designed to minimize potential harm to individuals, society, and the environment.

The importance of safe AI can be understood in the following ways:

  • Human safety: AI systems must prioritize human safety above all else, and the potential risk to human life and well-being must be minimized. Ensuring the safety of AI is necessary to prevent unintended consequences that could result in significant harm.

  • Public trust: Widespread adoption of AI systems is only possible if the public trusts the technology. Ensuring the safety of AI can help build public trust and reduce concerns about the impact of AI on society.

  • Legal and ethical considerations: Safe AI systems must comply with relevant legal and ethical considerations, such as data privacy laws and non-discrimination laws. Compliance with these regulations is necessary to avoid legal issues and promote ethical use of AI.

  • Economic benefits: Safe AI systems can bring significant economic benefits, including increased productivity and innovation. Ensuring the safety of AI can help prevent potential harm that could lead to economic losses.

  • Environmental impact: Safe AI systems can also minimize the environmental impact of technology. By reducing energy consumption, resource use, and waste, safe AI systems can help mitigate the negative impact of technology on the environment.