Artificial Intelligence

Digital DNA: Emulating Biological Inheritance in AI

I’m not an expert in either Biology or Artificial Intelligence. I also believe that what is discussed here can have serious ethical repercussions.

Take this article with a grain of salt.

That being said, I’ve been researching both topics extensively lately, and I am more amazed by how the human brain (and body) works than by how 2025 AI does.

Purpose

The purpose of this article is to highlight the similarities between the human brain and AI as we know it in 2025, to explore ways of emulating Biological Inheritance in AI (or, if you’d like, digital inheritance), and to ask whether it makes sense to do so.

Long story short

The proposal of this article is that, in the same way that information is transmitted via DNA across generations (something like a lightweight trained AI model that holds the most valuable information of our ancestors and could easily be used for “knowledge transfer”), modern AI could use the same methodology to:

  • Re-purpose AI models based on present needs.
  • Pass down learned experiences instead of starting from scratch, the same way DNA does it in humans.
  • Optimize efficiency by transferring refined algorithms.
  • Evolve dynamically, adapting to new environments without manual retraining.

It would make perfect sense for, e.g., ChatGPT to deploy a universal model for everybody to use in the beginning (as they currently do), but then, as people with, e.g., different job titles use it, create subsequent models trained on the specific interactions (the new data). These subsequent models would be created by passing down to the v2 “child” the “digital DNA” of the previous v1 “parent” generation.
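In machine-learning terms, the closest existing analogue to this parent-to-child hand-off is probably transfer learning: initialise a new model from an old one’s learned parameters and fine-tune it on new data. Here is a minimal, self-contained sketch of that idea; the one-feature linear model and all numbers are invented purely for illustration:

```python
def train(weights, data, epochs=100, lr=0.05):
    """Plain stochastic gradient descent on a 1-feature linear model y = w*x + b."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def loss(weights, data):
    """Mean squared error of the model on a dataset."""
    w, b = weights
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Generation v1: the "parent" learns a general-purpose mapping (y ≈ 2x + 1).
general_data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
parent = train((0.0, 0.0), general_data)

# Generation v2: the "child" inherits the parent's weights (its "digital DNA")
# and is fine-tuned briefly on a specialised task (y ≈ 2x + 3), instead of
# starting from scratch.
special_data = [(x, 2 * x + 3) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
child = train(parent, special_data, epochs=20)

print(loss(parent, special_data), loss(child, special_data))
```

The child only has to learn what changed between generations, which is exactly the efficiency argument made above.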

After several iterations, you have AI tailored to the target group’s specific needs. All that with minimal effort and without going through huge amounts of energy consumption, which is super expensive both financially and in terms of environmental impact.

A digital inheritance system could cut that down significantly.

This also means that, same as in the real world, AI would be tailored specifically to the current generation’s needs, concerns and, in general, their reality — without going through the expensive re-training.

To put it in a technology context: we could have up-to-date AI agents for code generation, well trained in the latest technologies, patterns, etc., without having to completely re-train them from scratch.

The rest of the article

In the rest of this article, I’ll list some important aspects of how the human brain works, how AI works, and a comparison between the two, in an attempt to further support the proposal.

DNA

DNA in humans is a way to store and transfer information, among other important things. The information passed through DNA in humans concerns multiple different areas, such as biological traits (e.g., eye color), epigenetic memories (e.g., transgenerational trauma), behavioral and cognitive patterns (e.g., addiction vulnerability), and evolutionary and survival skills (e.g., adaptation to climate).

DNA is also a great way to store huge amounts of information in a very small space. Specifically, it is estimated that 1 gram of human DNA can hold approximately 215 petabytes of information.
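To put that density in perspective, here is a quick back-of-the-envelope calculation. The model checkpoint size below is a made-up example, not a real figure:

```python
DNA_DENSITY_PB_PER_GRAM = 215   # petabytes per gram, as cited above
PB_IN_BYTES = 10 ** 15

# Hypothetical model checkpoint of 500 GB (illustrative number only).
model_bytes = 500 * 10 ** 9

grams_needed = model_bytes / (DNA_DENSITY_PB_PER_GRAM * PB_IN_BYTES)
print(f"{grams_needed:.2e} grams of DNA")  # a few millionths of a gram
```

Even a very large checkpoint would, at that density, occupy a vanishingly small physical amount of DNA.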

In the same manner, a DNA-like structure could be used to pass the most important, most valuable information from the previous AI generation to the next one — using minimal storage space, and enabling the next AI generation to become much better by focusing only on processing and training on the most valuable data: the new/current data.
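One very naive way to picture “keeping only the most valuable information” is magnitude pruning: store only the largest weights of the parent model and use them to seed the child. This is a toy heuristic of my own choosing, not an established “digital DNA” format:

```python
def extract_digital_dna(weights, keep_fraction=0.1):
    """Keep only the largest-magnitude weights: a naive stand-in for
    'the most valuable information', similar in spirit to magnitude pruning."""
    k = max(1, int(len(weights) * keep_fraction))
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)
    return {i: weights[i] for i in ranked[:k]}

def seed_child(dna, size):
    """Initialise a child model: inherited weights where available, zeros elsewhere."""
    return [dna.get(i, 0.0) for i in range(size)]

parent_weights = [0.01, -2.5, 0.03, 1.8, -0.02, 0.9, 0.0, -1.1, 0.04, 0.05]
dna = extract_digital_dna(parent_weights, keep_fraction=0.3)
child_weights = seed_child(dna, len(parent_weights))
print(dna)
```

The “DNA” here is a tiny sparse dictionary — far smaller than the parent — yet it carries the parameters that matter most under this heuristic.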

Emotions

This section might not be totally related to the overall article, since “emotions” might not be a good fit for most AI agents. That said, it is a very important and interesting topic, with a lot of recent breakthroughs that I find exciting to discuss.

Our predictive coding brain constructs the emotions we feel and how we see what happens to us, based on past knowledge and experiences (that is, our priors). And these priors condition how we react. Our brain is a prediction engine.

LLMs (e.g., ChatGPT) already generate responses and solutions via predictive methods that rely on the data on which the AI has been trained. Could AI also generate emotions for “itself”? If yes, could it actually feel the emotions?
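The “predictions from priors” idea can be made concrete with a tiny Bayesian update: a prior belief shaped by past experience is revised when new evidence arrives. This is a toy illustration of the principle, not a model of actual emotion; all probabilities are invented:

```python
def bayes_update(prior, likelihood):
    """Update a belief over hypotheses given new evidence.
    Both arguments map hypothesis -> probability."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Prior shaped by past experience: a sudden loud noise is probably harmless.
prior = {"threat": 0.2, "harmless": 0.8}
# New evidence (say, it happened in a dark alley) that favours "threat".
likelihood = {"threat": 0.9, "harmless": 0.3}

posterior = bayes_update(prior, likelihood)
print(posterior)  # the belief shifts toward "threat"
```

Two people with different priors would compute different posteriors from the same event — which is exactly the claim that priors condition how we react.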

Reward System (Human Brain vs AI)

The brain’s reward system consists of several structures at the core of the brain. This is where feelings of reward and pleasure are produced, and where coordination for decision-making happens.

The reward system is also how humans “learn” and is driven by dopamine.

Dopamine: Dopamine is linked to pleasure and reward in the human brain. In AI, a similar concept is used in reinforcement learning, where the AI is rewarded for correct decisions (thus ensuring it makes better decisions in the future). This can enable AI to continuously improve itself, without having to be retrained or remodelled. It is very important to note that the human brain works the same way: it always rewards itself for what it perceives to be good behavior, and this is how it continuously learns. There are several other chemicals playing a crucial role in the human body; they can also be correlated with AI behaviors:

  • Serotonin: Serotonin affects mood and emotional stability in the human brain. In AI, a similar context can be used to cultivate stability in AI outputs and enhance, for example, consistency in responses over time.
  • Oxytocin: Oxytocin enhances social bonding and trust in the human brain. In AI, a similar context can be used to enhance personalized AI and enhance human interactions.
  • Cortisol & Adrenaline: Cortisol and epinephrine (adrenaline) are involved in stress responses in the human brain. In AI, a similar context can be used for handling high-stakes decisions or performing risk evaluation in autonomous systems.
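The dopamine analogy maps quite directly onto the reward prediction error at the heart of reinforcement learning: the agent learns from the gap between the reward it expected and the reward it got. A minimal multi-armed-bandit sketch (all numbers are illustrative):

```python
import random

random.seed(42)

# True (hidden) reward probabilities of three possible actions.
true_rewards = [0.2, 0.5, 0.8]
values = [0.0, 0.0, 0.0]   # the agent's learned value estimates
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

for _ in range(2000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < epsilon:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: values[i])
    reward = 1.0 if random.random() < true_rewards[a] else 0.0
    # Reward prediction error: the "dopamine signal" that drives learning.
    values[a] += alpha * (reward - values[a])

print(values)  # the estimate for action 2 should end up the highest
```

No retraining or remodelling happens here: the agent’s behavior improves continuously as a side effect of acting and being rewarded.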

All of the above parameters could, of course, be toggled and streamlined for the AI agent’s specific use case. They could help the AI’s “brain” form “beliefs” or learn behaviors that could be passed down through the Digital DNA.
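One hand-wavy way to picture that “toggling”: each chemical analogue becomes a tunable parameter in the agent’s configuration. All the field names and values below are invented for illustration; no real system exposes knobs like these:

```python
from dataclasses import dataclass

@dataclass
class AgentTemperament:
    reward_scale: float   # "dopamine": how strongly success reinforces behaviour
    consistency: float    # "serotonin": preference for stable, repeatable outputs
    social_weight: float  # "oxytocin": how much to personalise to the user
    risk_aversion: float  # "cortisol/adrenaline": caution in high-stakes decisions

# A cautious, consistent agent, e.g. for medical triage...
triage_bot = AgentTemperament(reward_scale=0.5, consistency=0.9,
                              social_weight=0.6, risk_aversion=0.95)
# ...versus an exploratory brainstorming assistant.
idea_bot = AgentTemperament(reward_scale=1.0, consistency=0.3,
                            social_weight=0.8, risk_aversion=0.1)

print(triage_bot, idea_bot)
```

A temperament like this is small and serialisable, so it is exactly the kind of thing that could ride along in a “digital DNA” hand-off between generations.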

What is a stress response ?

Stress responses are triggered by the brain when it decides that we’re in a fight or flight situation. In ancient times, that could mean “running” into a shark, for example. Or waking up next to a snake.

Stress responses in modern life, can also be triggered by interactions with annoying colleagues. If not handled properly, they can have a significantly negative effect on one’s mental health.

If you’re unfamiliar with the topic you should definitely go read more about it. Especially if you work in a corporate setting.

Human brain, Computers and AI

The human brain can easily be compared to several computer (technology) related topics, which might help my fellow software developers gain a better understanding of some important parts of it.

Its inner functions, structure, and mechanisms have been replicated in technology (or at least attempts have been made to replicate them) numerous times. The most important replications/similarities are:

  • RAM vs Prefrontal Cortex: Random Access Memory (RAM) temporarily stores data for quick access, much like the prefrontal cortex, which handles working memory and decision-making in real-time.
  • Neural Networks vs Brain Neurons: AI models, particularly deep learning, mimic the brain’s neural networks, where artificial neurons process data in layers to extract patterns.
  • Storage vs Long-Term Memory: Hard drives and SSDs store vast amounts of information, comparable to how the brain consolidates memories into long-term storage.
  • Parallel Processing vs Distributed Brain Activity: Computers process tasks simultaneously, akin to how different brain regions work together to process complex thoughts.
  • Reinforcement Learning vs Habit Formation: AI uses reinforcement learning (trial and error with rewards) to optimize behavior, similar to how humans learn habits and skills over time.
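The second point above — artificial neurons processing data in layers — can be made concrete with a two-layer forward pass. The weights and inputs here are arbitrary; this is just the mechanism in miniature:

```python
import math

def sigmoid(x):
    """Squash a neuron's weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: each neuron computes a weighted sum of all inputs,
    adds its bias, and applies the activation function."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                    # input "stimulus"
hidden = layer(x, [[0.8, -0.2], [0.4, 0.9]], [0.1, -0.3])
output = layer(hidden, [[1.2, -0.7]], [0.05])
print(output)  # a single activation between 0 and 1
```

Stacking such layers, and adjusting the weights from data, is essentially what “deep learning” refers to.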

Do you think all that makes sense? My personal thought is that this model could have some serious ethical repercussions (or dilemmas) if misused. That’s something we can definitely discuss offline.

From a technical perspective, though, it makes total sense to implement this model.

Support human-generated content

The content, ideas, and crafting of this article are human generated. If you’d like to support me, you can Buy-Me-a-Coffee.