
Simulating Knowledge Transfer with Genetic Algorithms in JavaScript



Genetic algorithms (GAs) are inspired by the process of natural selection, simulating evolution to solve optimization problems. This project explores an unconventional yet fascinating application: simulating knowledge transfer between individuals using genetic algorithms, expressed through JavaScript code. We’ll break down the core concepts and demonstrate how ideas, experiences, and knowledge can “evolve” in a population of virtual individuals.

Understanding the Core Concepts

Before we start, let’s highlight the key components:

  • BrainData: Represents units of knowledge, emotions, ideas, etc.
  • Person: Represents individuals who “learn” and “evolve” based on genetic factors.
  • Genetic Operators: Mutation, crossover, and selection processes simulate the evolution of knowledge.

0. Prerequisites: Setting Up the Environment

Before diving into the code, it’s important to ensure that your development environment is properly set up. Here’s what you need to do:

Step 1: Install Node.js and npm

Node.js is a JavaScript runtime that allows you to run JavaScript code on the server side. It comes with npm (Node Package Manager), which helps you install and manage packages.

Installation Steps:

  • Download and install Node.js from the official website.
  • To verify the installation, open your terminal and run:
node -v
npm -v

This should display the installed versions of Node.js and npm.

Step 2: Create a New Project Directory

Open your terminal and create a new directory for your project. Navigate into that directory:

mkdir ga-project
cd ga-project

Alternatively, you can create a new directory using the file explorer and then open the terminal in that location.

Step 3: Initialize a New Node.js Project

Run the following command to create a package.json file, which will manage your project dependencies:

npm init -y

The -y flag automatically answers "yes" to all prompts, creating a default package.json file.

Step 4: Install the Dependencies

The only package the code in this article imports is uuid (used to generate a unique ID for each Person). To install it, run:

npm install uuid

1. Data Structures: Defining Knowledge Types and BrainData Class

We first define the different types of data (knowledge, emotion, memory, and so on) and create a BrainData class to encapsulate them.

const DataType = Object.freeze({
  KNOWLEDGE: "knowledge",
  EMOTION: "emotion",
  THOUGHT: "thought",
  EXPERIENCE: "experience",
  MEMORY: "memory",
  FACT: "fact",
  IDEA: "idea",
  QUESTION: "question",
  DEFINITION: "definition",
  QUOTE: "quote"
});

class BrainData {
  constructor(type, topic, data) {
    this.type = type;
    this.topic = topic;
    this.data = data;
    this.acquiredVia = null;
  }
}

Here, BrainData represents a unit of transferable information. Each instance contains a type, topic, and data, along with metadata (acquiredVia) about how it was acquired.

2. Simulating Knowledge Repositories

We initialize a collection of BrainData instances that will act as the pool of knowledge for individuals to learn from.

const brainDataCollection = [
  new BrainData(DataType.KNOWLEDGE, "algorithm", "A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer"),
  new BrainData(DataType.EMOTION, "joy", "A feeling of great pleasure and happiness."),
  new BrainData(DataType.FACT, "earth", "The Earth orbits around the Sun."),
  new BrainData(DataType.QUOTE, "inspiration", "The only way to do great work is to love what you do. – Steve Jobs"),
  new BrainData(DataType.QUESTION, "What is the meaning of life?", "The meaning of life is a philosophical question regarding the purpose and significance of human existence."),
];

To enhance the diversity and complexity of the Genetic Algorithm’s evolution, you can easily expand the brainDataCollection. This collection serves as the initial knowledge base for each individual in the population and can include various data types, such as knowledge, emotions, facts, quotes, or questions. Feel free to add more entries based on your needs or interests.
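For example, new entries are just additional BrainData instances pushed into the array. The sketch below is self-contained (it re-declares a minimal DataType and BrainData matching the definitions above), and the two sample entries are invented for illustration:

```javascript
// Minimal stand-ins for the definitions shown earlier,
// so this snippet runs on its own.
const DataType = Object.freeze({ IDEA: "idea", DEFINITION: "definition" });

class BrainData {
  constructor(type, topic, data) {
    this.type = type;
    this.topic = topic;
    this.data = data;
    this.acquiredVia = null;
  }
}

const brainDataCollection = [];

// Expanding the knowledge pool is simply a matter of pushing more instances.
brainDataCollection.push(
  new BrainData(DataType.IDEA, "gardening", "Companion planting can reduce pests without chemicals."),
  new BrainData(DataType.DEFINITION, "entropy", "A measure of disorder or randomness in a system.")
);

console.log(brainDataCollection.length); // 2
```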

3. Calculating Text Similarity with TF-IDF in JavaScript

The TF-IDF (Term Frequency-Inverse Document Frequency) algorithm is widely used to evaluate the importance of a word in a document relative to a collection of documents.

Understanding the Core Concepts

TF-IDF is composed of two main parts:

  • Term Frequency (TF): Measures how often a word appears in a document relative to the total number of words in that document.
  • Inverse Document Frequency (IDF): Measures how rare a word is across a collection of documents. This part is not computed in the code below, so the function is effectively a plain term-frequency cosine similarity; the TF-IDF name is kept for familiarity.

We also use cosine similarity to calculate the angle between two vectors, representing how similar two documents are.

The TF-IDF Similarity Function

Here’s the complete code that computes the TF-IDF vector for each input and calculates the cosine similarity:

const calculateTFIDF = (target, current) => {
  // Split on runs of non-word characters; filter(Boolean) drops the empty
  // strings that leading/trailing delimiters would otherwise produce.
  const targetWords = target.split(/\W+/).filter(Boolean);
  const currentWords = current.split(/\W+/).filter(Boolean);

  // Guard against empty inputs, which would yield NaN below.
  if (targetWords.length === 0 || currentWords.length === 0) return 0;

  const wordSet = new Set([...targetWords, ...currentWords]);
  const tf = word => arr => arr.filter(w => w === word).length / arr.length;

  const targetTFIDF = Array.from(wordSet).map(word => tf(word)(targetWords));
  const sourceTFIDF = Array.from(wordSet).map(word => tf(word)(currentWords));

  const dotProduct = targetTFIDF.reduce((sum, tf1, idx) => sum + tf1 * sourceTFIDF[idx], 0);
  const targetMagnitude = Math.sqrt(targetTFIDF.reduce((sum, tf1) => sum + tf1 ** 2, 0));
  const sourceMagnitude = Math.sqrt(sourceTFIDF.reduce((sum, tf2) => sum + tf2 ** 2, 0));

  return dotProduct / (targetMagnitude * sourceMagnitude);
};

Breaking Down the Function

1. Splitting the Text:

  • The input strings are split into the targetWords and currentWords arrays using a regular expression (/\W+/), which matches runs of non-word characters.

2. Creating the Vocabulary Set:

  • wordSet is a Set containing all unique words from both inputs.

3. Term Frequency Calculation:

  • tf(word)(arr) computes the term frequency for each word in the array arr.

4. TF-IDF Vectors:

  • targetTFIDF and sourceTFIDF store the term frequencies of all words from wordSet for both inputs.

5. Dot Product and Magnitudes:

  • dotProduct computes the sum of element-wise products of the two TF-IDF vectors.
  • targetMagnitude and sourceMagnitude calculate the Euclidean norms (magnitudes) of the vectors.

6. Cosine Similarity:

  • The final similarity is calculated as the ratio of the dot product to the product of the magnitudes.
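A quick sanity check of the computation. The helper below re-implements the same term-frequency cosine similarity compactly so the snippet is self-contained (similarity is a name introduced here, not part of the project code):

```javascript
// Compact, self-contained TF cosine similarity, mirroring calculateTFIDF.
const similarity = (a, b) => {
  const A = a.split(/\W+/).filter(Boolean);
  const B = b.split(/\W+/).filter(Boolean);
  if (A.length === 0 || B.length === 0) return 0;
  const vocab = [...new Set([...A, ...B])];
  const tf = arr => vocab.map(w => arr.filter(x => x === w).length / arr.length);
  const [va, vb] = [tf(A), tf(B)];
  const dot = va.reduce((s, x, i) => s + x * vb[i], 0);
  const mag = v => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(va) * mag(vb));
};

console.log(similarity("the cat", "the cat")); // identical text → ≈ 1
console.log(similarity("the cat", "the dog")); // half the words overlap → ≈ 0.5
console.log(similarity("cat", "dog"));         // no overlap → 0
```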

4. The Person Class: Simulating Individuals in the Population

Each Person has unique traits such as learning rate, memory retention, and credibility. This class models learning, mutation, crossover, and influence.

class Person {
  constructor(targetBrainData) {
    this.id = uuidv4();
    this.offspringOf = [];
    this.geneFactor = Math.random();
    this.learningRate = Math.max(Math.random(), 0.5);
    this.learningScalingFactor = Math.max(Math.random(), 0.5);
    this.memoryRetentionRate = Math.max(Math.random(), 0.5);
    this.influenceRate = Math.max(Math.random(), 0.5);
    this.influenceResistanceRate = Math.max(Math.random(), 0.5);
    this.influencedBy = [];
    this.brainData = [];
    this.credibilityScore = 0;
    this.targetBrainData = targetBrainData;
  }

  setInitialBrainData() {
    const maxAcquired = Math.floor(brainDataCollection.length * this.learningRate);
    const numberToLearn = Math.floor(this.learningScalingFactor * this.learningRate * maxAcquired);
    const limitedNumberToLearn = Math.min(numberToLearn, brainDataCollection.length);

    const randomIndices = new Set();

    while (randomIndices.size < limitedNumberToLearn) {
      const randomIndex = Math.floor(Math.random() * brainDataCollection.length);
      randomIndices.add(randomIndex);
    }

    randomIndices.forEach(index => {
      // Copy the entry so the shared knowledge pool is not mutated
      // when we tag how it was acquired.
      const selectedBrainData = { ...brainDataCollection[index] };
      selectedBrainData.acquiredVia = "initial";

      const alreadyExists = this.brainData.some(existingData =>
        existingData.type === selectedBrainData.type
        && existingData.topic === selectedBrainData.topic
        && existingData.data === selectedBrainData.data
      );

      if (!alreadyExists) {
        this.brainData.push(selectedBrainData);
      }
    });

    this.updateCredibilityScore();
  }
}

Learning Through Mutation and Evolution

learn() {
  const maxAcquired = Math.floor(brainDataCollection.length * this.learningRate);
  const numberToLearn = Math.floor(this.learningScalingFactor * this.learningRate * maxAcquired);
  const limitedNumberToLearn = Math.min(numberToLearn, brainDataCollection.length);

  const randomIndices = new Set();

  while (randomIndices.size < limitedNumberToLearn) {
    const randomIndex = Math.floor(Math.random() * brainDataCollection.length);
    randomIndices.add(randomIndex);
  }

  randomIndices.forEach(index => {
    const selectedBrainData = brainDataCollection[index];
    const mutatedData = this.mutateData(selectedBrainData, this.brainData);

    if (mutatedData != null) {
      mutatedData.acquiredVia = "learn";

      const alreadyExists = this.brainData.some(existingData =>
        existingData.type === mutatedData.type
        && existingData.topic === mutatedData.topic
        && existingData.data === mutatedData.data
      );

      if (!alreadyExists) {
        this.brainData.push(mutatedData);
        this.increaseLearningRate();
        this.increaseMemoryRetentionRate();
      }
    }
  });

  this.updateCredibilityScore();
}

mutateData(brainData, existingBrainData) {
  const mutatedData = { ...brainData };
  const forgettingRate = (1 - this.memoryRetentionRate);
  const maxSimilarityRate = 0.9;
  const minSimilarityRate = 0.7;
  let mutationOccurred = false;

  // Complete forgetting: the data is lost entirely.
  if (Math.random() < forgettingRate) {
    this.decreaseLearningRate();
    this.decreaseMemoryRetentionRate();

    return null;
  }

  // Type confusion: the data is remembered as a different kind of thing.
  if (!mutationOccurred && Math.random() < forgettingRate) {
    this.decreaseLearningRate();
    this.decreaseMemoryRetentionRate();

    const types = Object.values(DataType);
    mutatedData.type = types[Math.floor(Math.random() * types.length)];

    mutationOccurred = true;
  }

  // Topic confusion: the topic is replaced by that of existing knowledge.
  if (!mutationOccurred && existingBrainData.length > 0 && Math.random() < forgettingRate) {
    this.decreaseLearningRate();
    this.decreaseMemoryRetentionRate();

    const randomIndex = Math.floor(Math.random() * existingBrainData.length);
    mutatedData.topic = existingBrainData[randomIndex].topic;

    mutationOccurred = true;
  }

  // Partial forgetting: a random word is dropped from the content.
  if (!mutationOccurred && Math.random() < forgettingRate) {
    this.decreaseLearningRate();
    this.decreaseMemoryRetentionRate();

    const words = mutatedData.data.split(/\s+/);
    const wordIndex = Math.floor(Math.random() * words.length);

    words.splice(wordIndex, 1);
    mutatedData.data = words.join(" ");

    mutationOccurred = true;
  }

  // Association: merge with same-type knowledge on a similar topic.
  const sameTypeSimilarTopic = existingBrainData.find(data =>
    data.type === mutatedData.type
    && calculateTFIDF(data.topic, mutatedData.topic) > minSimilarityRate
    && calculateTFIDF(data.topic, mutatedData.topic) < maxSimilarityRate
  );

  if (!mutationOccurred && sameTypeSimilarTopic && Math.random() <= (this.learningRate * this.learningScalingFactor)) {
    mutatedData.data = `${sameTypeSimilarTopic.data}. ${mutatedData.data}.`;

    mutationOccurred = true;
  }

  // Cross-wiring: swap contents with same-type knowledge on a different topic.
  const sameTypeDiffTopic = existingBrainData.find(data =>
    data.type === mutatedData.type &&
    calculateTFIDF(data.topic, mutatedData.topic) < minSimilarityRate
  );

  if (!mutationOccurred && sameTypeDiffTopic && Math.random() < forgettingRate) {
    this.decreaseLearningRate();
    this.decreaseMemoryRetentionRate();

    [mutatedData.data, sameTypeDiffTopic.data] = [sameTypeDiffTopic.data, mutatedData.data];

    mutationOccurred = true;
  }

  // Cross-wiring across types: swap contents with similar-topic knowledge of a different type.
  const sameTopicDiffType = existingBrainData.find(data =>
    data.type !== mutatedData.type &&
    calculateTFIDF(data.topic, mutatedData.topic) >= minSimilarityRate
  );

  if (!mutationOccurred && sameTopicDiffType && Math.random() < forgettingRate) {
    this.decreaseLearningRate();
    this.decreaseMemoryRetentionRate();

    [mutatedData.data, sameTopicDiffType.data] = [sameTopicDiffType.data, mutatedData.data];

    mutationOccurred = true;
  }

  return mutatedData;
}

Here, individuals “learn” by acquiring knowledge with the potential for mutation. Mutations are controlled by memoryRetentionRate and other probabilistic factors.
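Each mutation branch fires with probability equal to the forgetting rate, the complement of memory retention. A small self-contained sketch of that relationship (the 0.8 retention value is chosen just for illustration):

```javascript
// With memoryRetentionRate = 0.8, each independently checked mutation
// branch fires with probability 0.2 (the first match wins).
const memoryRetentionRate = 0.8;
const forgettingRate = 1 - memoryRetentionRate;

// Estimate the complete-forgetting frequency over many trials.
let forgotten = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) {
  if (Math.random() < forgettingRate) forgotten++;
}
console.log((forgotten / trials).toFixed(1)); // ≈ "0.2"
```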

Crossover: Combining Knowledge

crossover(partner) {
  const favorParent = this.geneFactor > partner.geneFactor ? this : partner;

  const averageLearningRate = (this.learningRate + partner.learningRate) / 2;
  const learningRateVariation = (favorParent.learningRate - averageLearningRate) * (Math.random() * 0.5);
  const newLearningRate = averageLearningRate + learningRateVariation;

  const averageLearningScalingFactor = (this.learningScalingFactor + partner.learningScalingFactor) / 2;
  const learningScalingFactorVariation = (favorParent.learningScalingFactor - averageLearningScalingFactor) * (Math.random() * 0.5);
  const newLearningScalingFactor = averageLearningScalingFactor + learningScalingFactorVariation;

  const averageMemoryRetentionRate = (this.memoryRetentionRate + partner.memoryRetentionRate) / 2;
  const memoryRetentionRateVariation = (favorParent.memoryRetentionRate - averageMemoryRetentionRate) * (Math.random() * 0.5);
  const newMemoryRetentionRate = averageMemoryRetentionRate + memoryRetentionRateVariation;

  const averageInfluenceRate = (this.influenceRate + partner.influenceRate) / 2;
  const influenceRateVariation = (favorParent.influenceRate - averageInfluenceRate) * (Math.random() * 0.5);
  const newInfluenceRate = averageInfluenceRate + influenceRateVariation;

  const averageInfluenceResistanceRate = (this.influenceResistanceRate + partner.influenceResistanceRate) / 2;
  const influenceResistanceRateVariation = (favorParent.influenceResistanceRate - averageInfluenceResistanceRate) * (Math.random() * 0.5);
  const newInfluenceResistanceRate = averageInfluenceResistanceRate + influenceResistanceRateVariation;

  const combinedBrainData = [...new Set([...this.brainData, ...partner.brainData])];

  // A rough, biased shuffle of the combined knowledge
  // (not a uniform shuffle; kept for simplicity).
  const shuffleRate = (1 - newLearningRate);
  const shuffledBrainData = combinedBrainData.sort(() => Math.random() - shuffleRate);

  const maxAcquired = Math.floor(combinedBrainData.length * newLearningRate);
  const numberToLearn = Math.floor(newLearningScalingFactor * newLearningRate * maxAcquired);

  // Copy each entry so parents and offspring don't share references.
  const offspringBrainData = shuffledBrainData
    .slice(0, Math.max(1, numberToLearn))
    .map(data => ({ ...data, acquiredVia: "crossover" }));

  const offspring = new Person(this.targetBrainData);

  offspring.offspringOf = [this.id, partner.id];
  offspring.learningRate = newLearningRate;
  offspring.learningScalingFactor = newLearningScalingFactor;
  offspring.memoryRetentionRate = newMemoryRetentionRate;
  offspring.influenceRate = newInfluenceRate;
  offspring.influenceResistanceRate = newInfluenceResistanceRate;
  offspring.brainData = offspringBrainData;

  offspring.updateCredibilityScore();

  return offspring;
}

crossover() simulates the combination of knowledge between two individuals, mimicking reproduction in biological systems.
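The trait-blending formula (the parents' average, nudged toward the favored parent) can be distilled into a small helper. blendTrait is a name introduced here for illustration, not part of the project code:

```javascript
// Hypothetical helper distilling the crossover trait formula:
// child trait = parents' average, moved up to halfway toward the
// favored parent's value by a random factor in [0, 0.5).
const blendTrait = (a, b, favored) => {
  const average = (a + b) / 2;
  const variation = (favored - average) * (Math.random() * 0.5);
  return average + variation;
};

// Parents with learning rates 0.6 and 0.9, favoring the 0.9 parent:
// the child always lands between the average (0.75) and 0.825.
const child = blendTrait(0.6, 0.9, 0.9);
console.log(child >= 0.75 && child < 0.825); // true
```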

Influence and Social Learning

influence(targetPerson) {
  const baseResistanceThreshold = 1 - targetPerson.influenceResistanceRate;
  const influenceDistance = Math.abs(this.influenceRate - targetPerson.influenceResistanceRate);

  const dynamicAdjustmentFactor = influenceDistance * 0.5;
  const maxInfluenceLimit = Math.max(targetPerson.influenceResistanceRate, this.influenceRate);

  const resistanceThreshold = Math.min(baseResistanceThreshold + dynamicAdjustmentFactor, maxInfluenceLimit);

  if (this.influenceRate > resistanceThreshold) {
    const maxAcquired = Math.floor(this.brainData.length * resistanceThreshold);
    const numberToLearn = Math.floor(targetPerson.learningScalingFactor * targetPerson.learningRate * maxAcquired);
    const limitedNumberToLearn = Math.min(numberToLearn, this.brainData.length);

    const randomIndices = new Set();

    while (randomIndices.size < limitedNumberToLearn) {
      const randomIndex = Math.floor(Math.random() * this.brainData.length);
      randomIndices.add(randomIndex);
    }

    randomIndices.forEach(index => {
      // Copy the entry so the influencer's own knowledge is not mutated.
      const selectedBrainData = { ...this.brainData[index], acquiredVia: "influence" };

      const existingIndex = targetPerson.brainData.findIndex(existingData =>
        existingData.type === selectedBrainData.type
        && existingData.topic === selectedBrainData.topic
        && existingData.data === selectedBrainData.data
      );

      if (existingIndex === -1) {
        targetPerson.brainData.push(selectedBrainData);
      } else {
        targetPerson.brainData[existingIndex] = selectedBrainData;
      }

      this.increaseInfluenceRate();

      targetPerson.increaseLearningRate();
      targetPerson.increaseMemoryRetentionRate();
      targetPerson.decreaseInfluenceResistanceRate();
    });

    targetPerson.influencedBy.push(this.id);

    targetPerson.updateCredibilityScore();
  } else {
    this.decreaseInfluenceRate();
    targetPerson.increaseInfluenceResistanceRate();
  }
}

influence() allows one individual to transfer knowledge to another based on their influence rates, simulating real-world knowledge dissemination.
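To see how the threshold behaves, here is the resistance arithmetic factored out into a standalone function. computeResistanceThreshold is a name introduced for this illustration, and the 0.9/0.6 rates are made up:

```javascript
// Hypothetical helper reproducing the threshold arithmetic from influence().
const computeResistanceThreshold = (influenceRate, influenceResistanceRate) => {
  const base = 1 - influenceResistanceRate;
  const distance = Math.abs(influenceRate - influenceResistanceRate);
  const limit = Math.max(influenceResistanceRate, influenceRate);
  return Math.min(base + distance * 0.5, limit);
};

// A strong influencer (0.9) facing weak resistance (0.6):
// base = 0.4, distance = 0.3, limit = 0.9 → threshold ≈ 0.55.
const threshold = computeResistanceThreshold(0.9, 0.6);
console.log(threshold.toFixed(2)); // "0.55"
console.log(0.9 > threshold);      // true: the influence attempt succeeds
```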

Dynamic Learning Rate Adjustment

The increaseLearningRate and decreaseLearningRate methods adjust how quickly an individual acquires new data, simulating a person becoming more or less receptive to incoming information over time.

increaseLearningRate() {
  const percentage = 0.025;
  this.learningRate = Math.min(this.learningRate * (1 + percentage), 1);
  this.learningScalingFactor = Math.min(this.learningScalingFactor * (1 + (percentage / 2)), 1);
}

decreaseLearningRate() {
  const percentage = 0.025;
  this.learningRate = Math.max(this.learningRate * (1 - (percentage / 2)), 0);
  this.learningScalingFactor = Math.max(this.learningScalingFactor * (1 - (percentage / 4)), 0);
}
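Because each adjustment is multiplicative and clamped to the [0, 1] range, repeated increases push a rate toward 1 without ever overshooting. A small self-contained sketch of that behavior for a single rate:

```javascript
// Repeatedly apply the 2.5% multiplicative increase, clamped at 1,
// mirroring what increaseLearningRate() does to learningRate.
let rate = 0.5;
for (let i = 0; i < 200; i++) {
  rate = Math.min(rate * 1.025, 1);
}
console.log(rate); // 1 (the clamp takes over after ~29 iterations)
```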

Memory Retention Rate Adjustment

These methods modify how well an individual retains previous information, simulating forgetting or improved recall over time.

increaseMemoryRetentionRate() {
  const percentage = 0.01;
  this.memoryRetentionRate = Math.min(this.memoryRetentionRate * (1 + percentage), 1);
}

decreaseMemoryRetentionRate() {
  const percentage = 0.01;
  this.memoryRetentionRate = Math.max(this.memoryRetentionRate * (1 - (percentage / 2)), 0);
}

Influence Rate and Resistance Adjustments

These methods manage how strongly an individual influences others and how strongly they resist being influenced, enabling adaptability in dynamic environments.

increaseInfluenceRate() {
  const percentage = 0.01;
  this.influenceRate = Math.min(this.influenceRate * (1 + percentage), 1);
}

decreaseInfluenceRate() {
  const percentage = 0.01;
  this.influenceRate = Math.max(this.influenceRate * (1 - (percentage / 2)), 0);
}

increaseInfluenceResistanceRate() {
  const percentage = 0.01;
  this.influenceResistanceRate = Math.min(this.influenceResistanceRate * (1 + percentage), 1);
}

decreaseInfluenceResistanceRate() {
  const percentage = 0.01;
  this.influenceResistanceRate = Math.max(this.influenceResistanceRate * (1 - (percentage / 2)), 0);
}

Credibility Score Calculation

This score evaluates how closely a Person's data points align with the target data, based on type, topic, and content relevance, using weighted TF-IDF comparisons.

The updateCredibilityScore function uses three key components:

  1. Type Similarity (weighted at 10%)
  2. Topic Similarity (weighted at 20%)
  3. Data Content Similarity (weighted at 70%)

These weights are adjustable to emphasize certain aspects of the comparison more heavily than others.
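As a worked example of the weighting (the similarity values below are made up for illustration):

```javascript
// Assumed similarity scores for one (target, data) pair.
const typeSimilarity = 1.0;   // exact type match
const topicSimilarity = 0.8;  // closely related topics
const dataSimilarity = 0.5;   // partially overlapping content

// Weighted sum, matching the 10/20/70 split used by updateCredibilityScore.
const score = typeSimilarity * 0.1 + topicSimilarity * 0.2 + dataSimilarity * 0.7;
console.log(score.toFixed(2)); // "0.61"
```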

updateCredibilityScore() {
  const weightOfType = 0.1;
  const weightOfTopic = 0.2;
  const weightOfData = 0.7;

  let totalScore = 0;

  this.targetBrainData.forEach(target => {
    let highestScoreForTarget = 0;

    this.brainData.forEach(data => {
      const typeScore = calculateTFIDF(target.type, data.type) * weightOfType;
      const topicScore = calculateTFIDF(target.topic, data.topic) * weightOfTopic;
      const dataScore = calculateTFIDF(target.data, data.data) * weightOfData;

      const totalScoreForData = typeScore + topicScore + dataScore;

      if (totalScoreForData > highestScoreForTarget) {
        highestScoreForTarget = totalScoreForData;
      }
    });

    totalScore += highestScoreForTarget;
  });

  this.credibilityScore = totalScore / this.targetBrainData.length;
}

Displaying the Person’s Information

displayInfo(verbose) {
  console.log(`Id: ${this.id}`);

  if (this.offspringOf.length > 0 && verbose) {
    console.log(`Offspring Of:`);
    this.offspringOf.forEach((parent, index) => {
      console.log(` ${index + 1}. ${parent}`);
    });
  }

  console.log(`Gene Factor: ${(this.geneFactor * 100).toFixed(2)}%`);
  console.log(`Learning Rate: ${(this.learningRate * 100).toFixed(2)}%`);
  console.log(`Learning Scaling Factor: ${(this.learningScalingFactor * 100).toFixed(2)}%`);
  console.log(`Memory Retention Rate: ${(this.memoryRetentionRate * 100).toFixed(2)}%`);
  console.log(`Influence Rate: ${(this.influenceRate * 100).toFixed(2)}%`);
  console.log(`Influence Resistance Rate: ${(this.influenceResistanceRate * 100).toFixed(2)}%`);

  if (this.influencedBy.length > 0 && verbose) {
    console.log(`Influenced By:`);
    this.influencedBy.forEach((influencer, index) => {
      console.log(` ${index + 1}. ${influencer}`);
    });
  }

  if (verbose) {
    console.log(`Brain Data:`);
    this.brainData.forEach((data, index) => {
      console.log(` ${index + 1}. [${data.acquiredVia}] ${data.type} - ${data.topic} - ${data.data}`);
    });
  }

  // Tally how each piece of knowledge was acquired.
  const counts = { initial: 0, learn: 0, crossover: 0, influence: 0 };

  this.brainData.forEach(data => {
    if (data.acquiredVia in counts) {
      counts[data.acquiredVia] += 1;
    }
  });

  console.log(`Total Acquired Via Initial: ${counts.initial}`);
  console.log(`Total Acquired Via Learn: ${counts.learn}`);
  console.log(`Total Acquired Via Crossover: ${counts.crossover}`);
  console.log(`Total Acquired Via Influence: ${counts.influence}`);
  console.log(`Total Brain Data: ${this.brainData.length}`);

  if (verbose) {
    console.log(`Target Data:`);
    this.targetBrainData.forEach((target, index) => {
      console.log(` ${index + 1}. ${target.type} - ${target.topic} - ${target.data}`);
    });
  }

  console.log(`Credibility Score: ${(this.credibilityScore * 100).toFixed(2)}%`);
  console.log('\n');
}

5. The Population Class

In the Genetic Algorithm (GA) framework, the Population class serves as the core driver for evolving solutions over successive generations. It manages a collection of individuals (Person objects), optimizes the population through elitism, crossover, and learning, and aims to achieve a perfect solution where the credibilityScore reaches 100%.

class Population {
  constructor(targetBrainData, populationSize) {
    this.people = [];
    this.targetBrainData = targetBrainData;
    this.generationNo = 0;

    while (populationSize--) {
      const person = new Person(targetBrainData);
      person.setInitialBrainData();

      this.people.push(person);
    }

    this.elitismCount = Math.ceil(this.people.length * 0.1);
  }

  sort() {
    this.people.sort((a, b) => {
      return b.credibilityScore - a.credibilityScore;
    });
  }

  showGeneration() {
    console.log(`Generation: ${this.generationNo}`);
    console.log(`Elitism Count: ${this.elitismCount}`);

    this.people.forEach(person => {
      // Only print full details for a perfect individual.
      const verbose = person.credibilityScore === 1;

      person.displayInfo(verbose);
    });
  }

  populate() {
    this.sort();
    this.showGeneration();

    let isPerfectGeneration = false;

    const offspringCount = this.people.length - this.elitismCount;
    const newGeneration = this.people.slice(0, this.elitismCount);

    for (let i = 0; i < offspringCount; i++) {
      // Alternate between the two fittest individuals as the first parent,
      // and pair them with a random distinct partner.
      const parent1 = this.people[i % 2];
      let parent2;

      do {
        parent2 = this.people[Math.floor(Math.random() * this.people.length)];
      } while (parent2 === parent1);

      const offspring = parent1.crossover(parent2);

      newGeneration.push(offspring);
    }

    newGeneration.forEach(person => {
      person.learn();

      let targetPerson;

      do {
        targetPerson = this.people[Math.floor(Math.random() * this.people.length)];
      } while (targetPerson === person);

      person.influence(targetPerson);

      if (person.credibilityScore === 1) {
        isPerfectGeneration = true;
      }
    });

    this.people = newGeneration;

    if (isPerfectGeneration) {
      console.log("\nPerfect generation found!\n");
      this.sort();
      this.showGeneration();
      console.log(`Last Generation No: ${this.generationNo}\n`);
    } else {
      this.generationNo++;
      setTimeout(() => this.populate(), 100);
    }
  }
}

Key Components

1. Constructor Initialization:

  • Initializes a population of Person objects with targetBrainData.
  • Uses setInitialBrainData to seed initial knowledge.
  • Implements elitism by calculating the top 10% (elitismCount) of the population to carry forward.

2. Sorting and Display:

  • sort(): Sorts the population by credibilityScore in descending order.
  • showGeneration(): Displays information about the current generation, and uses the displayInfo method to print data in either verbose or summary mode.

3. Population Evolution:

  • Crossover: Pairs one of the two fittest individuals with a randomly chosen distinct partner to generate each offspring.
  • Learning and Influence: Individuals learn independently and are influenced by random peers.
  • The loop continues until a perfect solution (credibility score of 1) is found.

4. Recursive Evolution:

  • populate() is recursive via setTimeout, introducing a delay between generations for asynchronous processing.

6. Execution of the Genetic Algorithm

To demonstrate the full lifecycle of the Genetic Algorithm (GA), we’ll initialize the Population class with a target dataset (BrainData) and specify a population size. The process is then kicked off with the populate() method, which evolves the population until an optimal solution is found.

const target = [
  new BrainData(DataType.KNOWLEDGE, "algorithm", "A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer"),
  new BrainData(DataType.QUESTION, "life", "What is the meaning of life?")
];

const populationSize = 5;

let population = new Population(target, populationSize);
population.populate();

Explanation

1. Target Data Definition:

  • The target array contains BrainData objects that represent the desired outcome for the population.
  • The first entry focuses on “algorithm” knowledge, while the second is an existential question about life.

2. Population Initialization:

  • populationSize is set to 5, indicating that the initial generation will consist of five individuals.
  • new Population(target, populationSize) creates the population with the specified target data.

3. Population Evolution:

  • population.populate() starts the iterative GA process, where generations evolve until an individual reaches a perfect credibility score. Note that this is the only stopping condition, so the simulation may run for many generations.

Complete Full Code of the Project

const { v4: uuidv4 } = require('uuid');

const DataType = Object.freeze({
KNOWLEDGE: "knowledge",
EMOTION: "emotion",
THOUGHT: "thought",
EXPERIENCE: "experience",
MEMORY: "memory",
FACT: "fact",
IDEA: "idea",
QUESTION: "question",
DEFINITION: "definition",
QUOTE: "quote"
});

class BrainData {
constructor(type, topic, data) {
this.type = type;
this.topic = topic;
this.data = data;
this.acquiredVia = null;
}
}

const brainDataCollection = [
new BrainData(DataType.KNOWLEDGE, "algorithm", "A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer"),
new BrainData(DataType.EMOTION, "joy", "A feeling of great pleasure and happiness."),
new BrainData(DataType.FACT, "earth", "The Earth orbits around the Sun."),
new BrainData(DataType.QUOTE, "inspiration", "The only way to do great work is to love what you do. – Steve Jobs"),
new BrainData(DataType.QUESTION, "What is the meaning of life?", "The meaning of life is a philosophical question regarding the purpose and significance of human existence."),
];

const calculateTFIDF = (target, current) => {
const targetWords = target.split(/\W+/);
const currentWords = current.split(/\W+/);
const wordSet = new Set([...targetWords, ...currentWords]);
const tf = word => arr => arr.filter(w => w === word).length / arr.length;

const targetTFIDF = Array.from(wordSet).map(word => tf(word)(targetWords));
const sourceTFIDF = Array.from(wordSet).map(word => tf(word)(currentWords));

const dotProduct = targetTFIDF.reduce((sum, tf1, idx) => sum + tf1 * sourceTFIDF[idx], 0);
const targetMagnitude = Math.sqrt(targetTFIDF.reduce((sum, tf1) => sum + tf1 ** 2, 0));
const sourceMagnitude = Math.sqrt(sourceTFIDF.reduce((sum, tf2) => sum + tf2 ** 2, 0));

return dotProduct / (targetMagnitude * sourceMagnitude);
};

class Person {
constructor(targetBrainData) {
this.id = uuidv4();
this.offspringOf = [];
this.geneFactor = Math.random();
this.learningRate = Math.max(Math.random(), 0.5);
this.learningScalingFactor = Math.max(Math.random(), 0.5);
this.memoryRetentionRate = Math.max(Math.random(), 0.5);
this.influenceRate = Math.max(Math.random(), 0.5);
this.influenceResistanceRate = Math.max(Math.random(), 0.5);
this.influencedBy = [];
this.brainData = [];
this.credibilityScore = 0;
this.targetBrainData = targetBrainData;
}

setInitialBrainData() {
const maxAcquired = Math.floor(brainDataCollection.length * this.learningRate);
const numberToLearn = Math.floor(this.learningScalingFactor * this.learningRate * maxAcquired);
const limitedNumberToLearn = Math.min(numberToLearn, brainDataCollection.length);

const randomIndices = new Set();

while (randomIndices.size < limitedNumberToLearn) {
const randomIndex = Math.floor(Math.random() * brainDataCollection.length);
randomIndices.add(randomIndex);
}

randomIndices.forEach(index => {
const selectedBrainData = brainDataCollection[index];
selectedBrainData.acquiredVia = "initial";

const alreadyExists = this.brainData.some(existingData =>
existingData.type === selectedBrainData.type
&& existingData.topic === selectedBrainData.topic
&& existingData.data === selectedBrainData.data
);

if (!alreadyExists) {
this.brainData.push(selectedBrainData);
}
});

this.updateCredibilityScore();
}

learn() {
const maxAcquired = Math.floor(brainDataCollection.length * this.learningRate);
const numberToLearn = Math.floor(this.learningScalingFactor * this.learningRate * maxAcquired);
const limitedNumberToLearn = Math.min(numberToLearn, brainDataCollection.length);

const randomIndices = new Set();

while (randomIndices.size < limitedNumberToLearn) {
const randomIndex = Math.floor(Math.random() * brainDataCollection.length);
randomIndices.add(randomIndex);
}

randomIndices.forEach(index => {
const selectedBrainData = brainDataCollection[index];
const mutatedData = this.mutateData(selectedBrainData, this.brainData, this.learningRate);

if (mutatedData != null) {
mutatedData.acquiredVia = "learn";

const alreadyExists = this.brainData.some(existingData =>
existingData.type === mutatedData.type
&& existingData.topic === mutatedData.topic
&& existingData.data === mutatedData.data
);

if (!alreadyExists) {
this.brainData.push(mutatedData);
this.increaseLearningRate();
this.increaseMemoryRetentionRate();
}
}
});

this.updateCredibilityScore();
}

  mutateData(brainData, existingBrainData) {
    const mutatedData = { ...brainData };
    const forgettingRate = 1 - this.memoryRetentionRate;
    const maxSimilarityRate = 0.9;
    const minSimilarityRate = 0.7;
    let mutationOccurred = false;

    // Complete forgetting: the data never makes it into memory.
    if (Math.random() < forgettingRate) {
      this.decreaseLearningRate();
      this.decreaseMemoryRetentionRate();

      return null;
    }

    // Misclassification: the data is remembered under a random type.
    if (!mutationOccurred && Math.random() < forgettingRate) {
      this.decreaseLearningRate();
      this.decreaseMemoryRetentionRate();

      const types = Object.values(DataType);
      mutatedData.type = types[Math.floor(Math.random() * types.length)];

      mutationOccurred = true;
    }

    // Misattribution: the data is filed under another memory's topic.
    if (!mutationOccurred && Math.random() < forgettingRate) {
      this.decreaseLearningRate();
      this.decreaseMemoryRetentionRate();

      const randomIndex = Math.floor(Math.random() * existingBrainData.length);
      mutatedData.topic = existingBrainData[randomIndex].topic;

      mutationOccurred = true;
    }

    // Partial forgetting: a random word is dropped from the data.
    if (!mutationOccurred && Math.random() < forgettingRate) {
      this.decreaseLearningRate();
      this.decreaseMemoryRetentionRate();

      const words = mutatedData.data.split(/\s+/);
      const wordIndex = Math.floor(Math.random() * words.length);

      words.splice(wordIndex, 1);
      mutatedData.data = words.join(" ");

      mutationOccurred = true;
    }

    // Association: merge with a same-type memory whose topic is similar,
    // but not near-identical.
    const sameTypeSimilarTopic = existingBrainData.find(data =>
      data.type === mutatedData.type
      && calculateTFIDF(data.topic, mutatedData.topic) > minSimilarityRate
      && calculateTFIDF(data.topic, mutatedData.topic) < maxSimilarityRate
    );

    if (!mutationOccurred && sameTypeSimilarTopic && Math.random() <= (this.learningRate * this.learningScalingFactor)) {
      mutatedData.data = `${sameTypeSimilarTopic.data}. ${mutatedData.data}.`;

      mutationOccurred = true;
    }

    // Confusion: swap contents with a same-type memory on a different topic.
    const sameTypeDiffTopic = existingBrainData.find(data =>
      data.type === mutatedData.type
      && calculateTFIDF(data.topic, mutatedData.topic) < minSimilarityRate
    );

    if (!mutationOccurred && sameTypeDiffTopic && Math.random() < forgettingRate) {
      this.decreaseLearningRate();
      this.decreaseMemoryRetentionRate();

      [mutatedData.data, sameTypeDiffTopic.data] = [sameTypeDiffTopic.data, mutatedData.data];

      mutationOccurred = true;
    }

    // Confusion: swap contents with a similar-topic memory of a different type.
    const sameTopicDiffType = existingBrainData.find(data =>
      data.type !== mutatedData.type
      && calculateTFIDF(data.topic, mutatedData.topic) >= minSimilarityRate
    );

    if (!mutationOccurred && sameTopicDiffType && Math.random() < forgettingRate) {
      this.decreaseLearningRate();
      this.decreaseMemoryRetentionRate();

      [mutatedData.data, sameTopicDiffType.data] = [sameTopicDiffType.data, mutatedData.data];

      mutationOccurred = true;
    }

    return mutatedData;
  }

  crossover(partner) {
    // Each offspring trait is the parents' average, nudged up to halfway
    // toward the parent with the stronger gene factor.
    const favorParent = this.geneFactor > partner.geneFactor ? this : partner;

    const averageLearningRate = (this.learningRate + partner.learningRate) / 2;
    const learningRateVariation = (favorParent.learningRate - averageLearningRate) * (Math.random() * 0.5);
    const newLearningRate = averageLearningRate + learningRateVariation;

    const averageLearningScalingFactor = (this.learningScalingFactor + partner.learningScalingFactor) / 2;
    const learningScalingFactorVariation = (favorParent.learningScalingFactor - averageLearningScalingFactor) * (Math.random() * 0.5);
    const newLearningScalingFactor = averageLearningScalingFactor + learningScalingFactorVariation;

    const averageMemoryRetentionRate = (this.memoryRetentionRate + partner.memoryRetentionRate) / 2;
    const memoryRetentionRateVariation = (favorParent.memoryRetentionRate - averageMemoryRetentionRate) * (Math.random() * 0.5);
    const newMemoryRetentionRate = averageMemoryRetentionRate + memoryRetentionRateVariation;

    const averageInfluenceRate = (this.influenceRate + partner.influenceRate) / 2;
    const influenceRateVariation = (favorParent.influenceRate - averageInfluenceRate) * (Math.random() * 0.5);
    const newInfluenceRate = averageInfluenceRate + influenceRateVariation;

    const averageInfluenceResistanceRate = (this.influenceResistanceRate + partner.influenceResistanceRate) / 2;
    const influenceResistanceRateVariation = (favorParent.influenceResistanceRate - averageInfluenceResistanceRate) * (Math.random() * 0.5);
    const newInfluenceResistanceRate = averageInfluenceResistanceRate + influenceResistanceRateVariation;

    // Pool both parents' knowledge, dropping duplicate references.
    const combinedBrainData = [...new Set([...this.brainData, ...partner.brainData])];

    // Loosely shuffle the pool; a higher learning rate shuffles more evenly.
    const shuffleRate = 1 - newLearningRate;
    const shuffledBrainData = combinedBrainData.sort(() => Math.random() - shuffleRate);

    const maxAcquired = Math.floor(combinedBrainData.length * newLearningRate);
    const numberToLearn = Math.floor(newLearningScalingFactor * newLearningRate * maxAcquired);
    const offspringBrainData = shuffledBrainData.slice(0, Math.max(1, numberToLearn));

    offspringBrainData.forEach(data => {
      data.acquiredVia = "crossover";
    });

    const offspring = new Person(this.targetBrainData);

    offspring.id = uuidv4();
    offspring.offspringOf = [this.id, partner.id];
    offspring.learningRate = newLearningRate;
    offspring.learningScalingFactor = newLearningScalingFactor;
    offspring.memoryRetentionRate = newMemoryRetentionRate;
    offspring.influenceRate = newInfluenceRate;
    offspring.influenceResistanceRate = newInfluenceResistanceRate;
    offspring.brainData = offspringBrainData;

    offspring.updateCredibilityScore();

    return offspring;
  }

  influence(targetPerson) {
    const baseResistanceThreshold = 1 - targetPerson.influenceResistanceRate;
    const influenceDistance = Math.abs(this.influenceRate - targetPerson.influenceResistanceRate);

    const dynamicAdjustmentFactor = influenceDistance * 0.5;
    const maxInfluenceLimit = Math.max(targetPerson.influenceResistanceRate, this.influenceRate);

    const resistanceThreshold = Math.min(baseResistanceThreshold + dynamicAdjustmentFactor, maxInfluenceLimit);

    if (this.influenceRate > resistanceThreshold) {
      const maxAcquired = Math.floor(this.brainData.length * resistanceThreshold);
      const numberToLearn = Math.floor(targetPerson.learningScalingFactor * targetPerson.learningRate * maxAcquired);
      const limitedNumberToLearn = Math.min(numberToLearn, this.brainData.length);

      const randomIndices = new Set();

      while (randomIndices.size < limitedNumberToLearn) {
        const randomIndex = Math.floor(Math.random() * this.brainData.length);
        randomIndices.add(randomIndex);
      }

      randomIndices.forEach(index => {
        // Copy the data so the influencer's own record keeps its original
        // acquiredVia label.
        const selectedBrainData = { ...this.brainData[index], acquiredVia: "influence" };

        const existingIndex = targetPerson.brainData.findIndex(existingData =>
          existingData.type === selectedBrainData.type
          && existingData.topic === selectedBrainData.topic
          && existingData.data === selectedBrainData.data
        );

        if (existingIndex === -1) {
          targetPerson.brainData.push(selectedBrainData);
        } else {
          targetPerson.brainData[existingIndex] = selectedBrainData;
        }

        this.increaseInfluenceRate();

        targetPerson.increaseLearningRate();
        targetPerson.increaseMemoryRetentionRate();
        targetPerson.decreaseInfluenceResistanceRate();
      });

      targetPerson.influencedBy.push(this.id);

      targetPerson.updateCredibilityScore();
    } else {
      // Failed attempt: the influencer weakens, the target hardens.
      this.decreaseInfluenceRate();
      targetPerson.increaseInfluenceResistanceRate();
    }
  }

  // Rates grow and decay multiplicatively, clamped to [0, 1];
  // decreases are deliberately gentler than increases.
  increaseLearningRate() {
    const percentage = 0.025;
    this.learningRate = Math.min(this.learningRate * (1 + percentage), 1);
    this.learningScalingFactor = Math.min(this.learningScalingFactor * (1 + percentage / 2), 1);
  }

  decreaseLearningRate() {
    const percentage = 0.025;
    this.learningRate = Math.max(this.learningRate * (1 - percentage / 2), 0);
    this.learningScalingFactor = Math.max(this.learningScalingFactor * (1 - percentage / 4), 0);
  }

  increaseMemoryRetentionRate() {
    const percentage = 0.01;
    this.memoryRetentionRate = Math.min(this.memoryRetentionRate * (1 + percentage), 1);
  }

  decreaseMemoryRetentionRate() {
    const percentage = 0.01;
    this.memoryRetentionRate = Math.max(this.memoryRetentionRate * (1 - percentage / 2), 0);
  }

  increaseInfluenceRate() {
    const percentage = 0.01;
    this.influenceRate = Math.min(this.influenceRate * (1 + percentage), 1);
  }

  decreaseInfluenceRate() {
    const percentage = 0.01;
    this.influenceRate = Math.max(this.influenceRate * (1 - percentage / 2), 0);
  }

  increaseInfluenceResistanceRate() {
    const percentage = 0.01;
    this.influenceResistanceRate = Math.min(this.influenceResistanceRate * (1 + percentage), 1);
  }

  decreaseInfluenceResistanceRate() {
    const percentage = 0.01;
    this.influenceResistanceRate = Math.max(this.influenceResistanceRate * (1 - percentage / 2), 0);
  }

  updateCredibilityScore() {
    // Data content matters most, then topic, then type.
    const weightOfType = 0.1;
    const weightOfTopic = 0.2;
    const weightOfData = 0.7;

    let totalScore = 0;

    // For each target, count only the best-matching piece of brain data.
    this.targetBrainData.forEach(target => {
      let highestScoreForTarget = 0;

      this.brainData.forEach(data => {
        const typeScore = calculateTFIDF(target.type, data.type) * weightOfType;
        const topicScore = calculateTFIDF(target.topic, data.topic) * weightOfTopic;
        const dataScore = calculateTFIDF(target.data, data.data) * weightOfData;

        const totalScoreForData = typeScore + topicScore + dataScore;

        if (totalScoreForData > highestScoreForTarget) {
          highestScoreForTarget = totalScoreForData;
        }
      });

      totalScore += highestScoreForTarget;
    });

    this.credibilityScore = totalScore / this.targetBrainData.length;
  }

  displayInfo(verbose) {
    console.log(`Id: ${this.id}`);

    if (this.offspringOf.length > 0 && verbose) {
      console.log(`Offspring Of:`);
      this.offspringOf.forEach((parent, index) => {
        console.log(`  ${index + 1}. ${parent}`);
      });
    }

    console.log(`Gene Factor: ${(this.geneFactor * 100).toFixed(2)}%`);
    console.log(`Learning Rate: ${(this.learningRate * 100).toFixed(2)}%`);
    console.log(`Learning Scaling Factor: ${(this.learningScalingFactor * 100).toFixed(2)}%`);
    console.log(`Memory Retention Rate: ${(this.memoryRetentionRate * 100).toFixed(2)}%`);
    console.log(`Influence Rate: ${(this.influenceRate * 100).toFixed(2)}%`);
    console.log(`Influence Resistance Rate: ${(this.influenceResistanceRate * 100).toFixed(2)}%`);

    if (this.influencedBy.length > 0 && verbose) {
      console.log(`Influenced By:`);
      this.influencedBy.forEach((influencer, index) => {
        console.log(`  ${index + 1}. ${influencer}`);
      });
    }

    if (verbose) {
      console.log(`Brain Data:`);
      this.brainData.forEach((data, index) => {
        console.log(`  ${index + 1}. [${data.acquiredVia}] ${data.type} - ${data.topic} - ${data.data}`);
      });
    }

    // Tally how each piece of brain data was acquired.
    let acquiredViaInitial = 0;
    let acquiredViaLearn = 0;
    let acquiredViaCrossover = 0;
    let acquiredViaInfluence = 0;

    this.brainData.forEach(data => {
      if (data.acquiredVia === 'initial') {
        acquiredViaInitial += 1;
      } else if (data.acquiredVia === 'learn') {
        acquiredViaLearn += 1;
      } else if (data.acquiredVia === 'crossover') {
        acquiredViaCrossover += 1;
      } else if (data.acquiredVia === 'influence') {
        acquiredViaInfluence += 1;
      }
    });

    console.log(`Total Acquired Via Initial: ${acquiredViaInitial}`);
    console.log(`Total Acquired Via Learn: ${acquiredViaLearn}`);
    console.log(`Total Acquired Via Crossover: ${acquiredViaCrossover}`);
    console.log(`Total Acquired Via Influence: ${acquiredViaInfluence}`);
    console.log(`Total Brain Data: ${this.brainData.length}`);

    if (verbose) {
      console.log(`Target Data:`);
      this.targetBrainData.forEach((target, index) => {
        console.log(`  ${index + 1}. ${target.type} - ${target.topic} - ${target.data}`);
      });
    }

    console.log(`Credibility Score: ${(this.credibilityScore * 100).toFixed(2)}%`);
    console.log('\n');
  }
}

class Population {
  constructor(targetBrainData, populationSize) {
    this.people = [];
    this.targetBrainData = targetBrainData;
    this.generationNo = 0;

    while (populationSize--) {
      // Each person starts with a random sample of the shared collection.
      const person = new Person(this.targetBrainData);
      person.setInitialBrainData(brainDataCollection);

      this.people.push(person);
    }

    // Reserve the top 10% of each generation as elites.
    this.elitismCount = Math.ceil(this.people.length * 0.1);
  }

  sort() {
    // Fittest (highest credibility score) first.
    this.people.sort((a, b) => b.credibilityScore - a.credibilityScore);
  }

  showGeneration() {
    console.log(`Generation: ${this.generationNo}`);
    console.log(`Elitism Count: ${this.elitismCount}`);

    this.people.forEach(person => {
      // Only print full details for a perfect-score person.
      const verbose = person.credibilityScore === 1;
      person.displayInfo(verbose);
    });
  }

  populate() {
    this.sort();
    this.showGeneration();

    let isPerfectGeneration = false;

    // Elitism: the top performers survive into the next generation unchanged.
    const offspringCount = this.people.length - this.elitismCount;
    const newGeneration = this.people.slice(0, this.elitismCount);

    for (let i = 0; i < offspringCount; i++) {
      // Alternate between the two fittest people as the first parent.
      const parent1 = this.people[i % 2];
      let parent2;

      do {
        parent2 = this.people[Math.floor(Math.random() * this.people.length)];
      } while (parent2 === parent1);

      newGeneration.push(parent1.crossover(parent2));
    }

    newGeneration.forEach(person => {
      person.learn();

      // Each person tries to influence one random other person.
      let targetPerson;

      do {
        targetPerson = this.people[Math.floor(Math.random() * this.people.length)];
      } while (targetPerson === person);

      person.influence(targetPerson);

      if (person.credibilityScore === 1) {
        isPerfectGeneration = true;
      }
    });

    this.people = newGeneration;

    if (isPerfectGeneration) {
      console.log("\nPerfect generation found!\n");
      this.sort();
      this.showGeneration();
      console.log(`Last Generation No: ${this.generationNo}\n`);
    } else {
      this.generationNo++;
      setTimeout(() => this.populate(), 100);
    }
  }
}

console.log('\n');

const target = [
  new BrainData(DataType.KNOWLEDGE, "algorithm", "A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer"),
  new BrainData(DataType.QUESTION, "life", "What is the meaning of life?")
];

const populationSize = 5;

const population = new Population(target, populationSize);
population.populate();
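Every offspring trait in `crossover()` is blended the same way: take the parents' average, then move up to half the distance toward the parent with the higher `geneFactor`. A minimal, self-contained sketch of that arithmetic (`blendTrait` is an illustrative name, not part of the project):

```javascript
// Blend two parent trait values the way crossover() does: start from the
// average, then move up to 50% of the way toward the favored parent's value.
// `rand` is injectable here so the bounds are easy to check deterministically.
function blendTrait(parentA, parentB, favored, rand = Math.random()) {
  const average = (parentA + parentB) / 2;
  const variation = (favored - average) * (rand * 0.5);
  return average + variation;
}

// With learning rates 0.25 and 0.75 favoring 0.75, the offspring's rate
// always falls in [0.5, 0.625]:
console.log(blendTrait(0.25, 0.75, 0.75, 0)); // 0.5 (no variation)
console.log(blendTrait(0.25, 0.75, 0.75, 1)); // 0.625 (maximum variation)
```

Because the result can never pass the midpoint between the average and the favored parent, offspring traits stay inside the parents' range while still drifting toward stronger genes over generations.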

Conclusion

This code demonstrates how genetic algorithms can simulate knowledge transfer between individuals in a population. By combining mutation, crossover, and influence mechanisms, we can watch ideas evolve across generations. The model can be extended to simulate real-world scenarios such as the spread of innovation, collaborative learning, or even aspects of artificial intelligence development.
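The influence mechanism in particular gates on a dynamically adjusted resistance threshold: the target's base threshold (1 minus its resistance rate) is raised by half the gap between the two rates, then capped at the higher of them. A small sketch of just that check (`canInfluence` is an illustrative name, not part of the project):

```javascript
// Reproduce the threshold logic from Person.influence(): influence succeeds
// only when the influencer's rate clears the target's adjusted resistance.
function canInfluence(influenceRate, resistanceRate) {
  const baseThreshold = 1 - resistanceRate;
  const distance = Math.abs(influenceRate - resistanceRate);
  const adjusted = baseThreshold + distance * 0.5;
  const cap = Math.max(resistanceRate, influenceRate);
  return influenceRate > Math.min(adjusted, cap);
}

console.log(canInfluence(0.75, 0.5)); // true: 0.75 > min(0.625, 0.75)
console.log(canInfluence(0.25, 0.5)); // false: 0.25 < min(0.625, 0.5)
```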

This project illustrates the potential of blending genetic algorithms with cognitive processes to create fascinating simulations and experiments.
