

The brain consists of neural networks. How can a baby say "cat" when it looks at a cat? How does the network learn about family, animals, daily tasks and nature? How does the brain differentiate sounds and music?




When a baby looks at a cat, how does it say "cat" at the same moment?

Ali's parents say "cat" whenever they see a cat in front of them. This event acts as a trigger in Ali's mind and activates patterns there: a visual pattern and a sound pattern, in different networks, at the same time. When a network receives the same trigger over and over, the weights of that network are strengthened. That is why, when Ali sees a cat at the door, he shouts "cat".
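A minimal sketch of this strengthening idea, as a Hebbian-style rule (the function name and rate are hypothetical, assuming repeated co-activation simply increments the link weight):

```python
# Hebbian-style strengthening: every time the visual "cat" pattern and the
# sound "cat" pattern fire together, the link between them gets stronger.
def strengthen(weight, pre_active, post_active, rate=0.1):
    """Increase the weight only when both patterns are active at once."""
    if pre_active and post_active:
        weight += rate
    return weight

w = 0.0
for _ in range(5):          # Ali sees a cat while his parents say "cat"
    w = strengthen(w, True, True)
print(w)                    # the link is now stronger than before
```

After enough repetitions, the visual pattern alone is enough to pull in the sound pattern, which is the "cat at the door" moment above.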





This solves the data traversal, or information processing, problem in the network. The network stores event information, distributed over the network. Each object corresponds to some pattern in our mind (fig. 2). Say I wake up early this morning, but yesterday I woke up late; again the neural network produces a pattern, which is stored in a memory called episodic memory.

Let's say:

Initially: [0, 0, 0, 0]

Object: cat

Output: o = [0.1, 0.2, 0, -1]

o[0] = 0.1 > 0.05 → 0.1 // associate

o[1] = 0.2 > 0.05 → 0.2 // associate

o[2] = 0 ≤ 0.05 → 0 // inhibit

o[3] = -1 ≤ 0.05 → 0 // inhibit

The pattern follows associate and inhibit: associate (+), inhibit (−). Such a pattern can be some 10 billion bits long.
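The thresholding step above can be sketched in a few lines (the function name and the 0.05 threshold are taken from the example; nothing else is assumed):

```python
# Associate/inhibit thresholding: outputs above the threshold keep their
# value (associate, +); outputs at or below it are zeroed out (inhibit, -).
def associate_inhibit(outputs, threshold=0.05):
    return [o if o > threshold else 0 for o in outputs]

pattern = associate_inhibit([0.1, 0.2, 0, -1])
print(pattern)  # [0.1, 0.2, 0, 0]
```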



When a brain stroke or a similar accident happens, in most cases the sufferer forgets much of their information. The neural network loses various links during the event, but after some time the neurons regenerate connections. According to connectionist theory, everything you see, remember, taste, smell or touch is just a pattern of neurons, an array of signals. If that is the case, then why isn't our deep model intelligent like us? How do bits solve the problem of cognition? Can artificial cognition be solved through this proposed theory?


Thank you, have a nice day.


Strong AI

If you're fond of general AI, then you must be familiar with weak AI and strong AI. What we try to achieve today is automating basic tasks such as chatting, reading books, facial expressions, sound and so on. Sometimes classification is the best approach for the data, but sometimes it is the worst case; then we move to pattern-based techniques, reinforcement techniques and so on.

Strong AI, however, is different, because memory, cognition, perception, dreams and motives come under discussion. Writing code for strong AI is a very hectic task, even when you have the algorithm.


Behaviour Program

Will you code this in your language? No, because there are limitations that must be overcome.

First of all you have sets of neurons, motor neurons and sensory neurons, and then you need a language that executes the given algorithm.

If you are curious like me, then study the principles of synthetic intelligence. Since 1970 scientists have divided AI into two types:

1. Weak AI

2. Strong AI




After 1990, programmers could finally code weak AI. It covers many techniques you know, such as classification, clustering, generative adversarial networks (GANs), convolutional networks (CNNs) and so on. In this field you have data, an algorithm and a model, but the model needs more data and more processing power. For example, deep-fake videos and pictures went very viral on social media recently. At least 1 to 2 days are required to generate such images on your laptop (if your laptop has a GPU installed).



In the second approach, however, you have to construct links, hierarchies and so on, which are linked with sensors, actuators, registers, a language, an interface, etc. It works both sequentially and in parallel, but you need a good book covering cognitive science and computer science. Strong AI is the generalized form of AI, while weak AI is the narrow form. Both need artificial neural networks. If you want to, contact me at alideveloper95@gmail.com; together we can solve this. Have a nice day, thank you.

Updated: Oct 21, 2019

Many people have worked on artificial neural nets since the late 1990s. Are you one of them? The field where ANN models are used is called "machine learning", and nowadays this field is called deep learning, where we study deep learning methods on various deep models. Initially a group of scientists built the mathematical ANN model.


ANN

Dietrich Dörner's ANN model is a little different from the above. According to Dörner, a neuron consists of:

  1. Activity "A"

  2. Max activation Function "Max"

  3. Amplification factor "Amp"

  4. Threshold Value "t"


So many things are unclear at the start. After some research, all the elements become clear.

The activity "A" ranges from 0 to 1. The maximum activation "Max" bounds the output from 0 to Max; in some cases Max will be 1. The amplification factor Amp = Vout / Vin, and the threshold ranges from 0 to 1; 0.5 is considered an optimal threshold value.

Output Formula:

O = min(Max, Amp * A) ------------------------------------------------------------------------------ (1)
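Formula (1) says the output grows linearly with the activity, amplified by Amp, but is clipped at Max. A minimal sketch (function name is hypothetical):

```python
# Formula (1): O = min(Max, Amp * A)
# The output rises with activity A, scaled by the amplification factor Amp,
# but can never exceed the maximum activation Max.
def neuron_output(activity, amp, max_activation=1.0):
    return min(max_activation, amp * activity)

print(neuron_output(0.4, 2.0))   # 0.8 -- below the cap
print(neuron_output(0.8, 2.0))   # 1.0 -- clipped at Max
```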

According to him, neurons are of four types:

  • Activator

  • Inhibitor

  • Associate

  • Disassociate

Activator neurons take part in creating, removing, changing and activating the links of other neurons; inhibitor neurons do the opposite. Activators are also called excitatory neurons.

Let's assume a neuron i is an activator neuron and conveys its input, after processing, to neuron j for further processing; then v[i,j] = O[i] * w[i,j]. Here O[i] is the output of neuron i and w[i,j] is the weight of the link between i and its successor neuron j. In the opposite (inhibitory) case, v[i,j] = -O[i] * w[i,j]. If you want to calculate neuron j's activation A[j], then:

A[j] = max(0, Σ{i = 0 to n} v[i,j] - t) -------------------------------------------------------- (2)
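A sketch of the v[i,j] rule together with formula (2) (function name and the example numbers are hypothetical):

```python
# Formula (2): A[j] = max(0, sum over i of v[i,j] - t)
# Each predecessor i contributes v[i,j] = O[i] * w[i,j]; for an inhibitory
# link the contribution is negated. The threshold t is subtracted once.
def activation(outputs, weights, t=0.5, inhibitory=None):
    inhibitory = inhibitory or set()
    total = 0.0
    for i, (o, w) in enumerate(zip(outputs, weights)):
        v = -o * w if i in inhibitory else o * w
        total += v
    return max(0.0, total - t)

# Two excitatory inputs and one inhibitory input feeding neuron j:
print(activation([1.0, 0.8, 0.5], [0.9, 0.5, 0.4], inhibitory={2}))
```

The max(0, ...) keeps the activation non-negative: if the inhibitory inputs and the threshold outweigh the excitatory inputs, the neuron simply stays silent.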

Associate and Disassociate

Association and dissociation depend upon four further constants:

Learning Constant L

Dissociation constant D

Decay constant K

Decay threshold T

When an associate neuron transmits its activation value to neuron j and j itself is also activated, then the weights between j and all other activated neurons increase, and the new value of w[i,j] is calculated as:

w[i,j]{new} = sqrt(w[i,j]{old}^2 + A[i] * A[j] * A[associate] * w[associate,j] * L) ----------- (3)

If the neuron is not active, then:

w[i,j]{new} = sqrt(max(0, w[i,j]^2 - K)) --------------------------------------------------------------- (4)


If w[i,j] < T then the link is weak. The value of K is quite small, e.g. 0.05.

Like inhibitory neurons, disassociate neurons are the opposite of associate neurons. Their weight is calculated through the given formula:


w[i,j]{new} = sqrt(max(0, w[i,j]^2 - A[i] * A[j] * A[disassociate] * w[disassociate,j] * D)) ------- (5)
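The three update rules (3), (4) and (5) can be sketched together; the function names and the constant values are hypothetical, and I read L and D as scaling their Hebbian terms, consistent with K in formula (4):

```python
import math

L, D, K, T = 0.1, 0.1, 0.05, 0.01  # learning, dissociation, decay constants; decay threshold

def associate_update(w_ij, a_i, a_j, a_assoc, w_assoc_j):
    """Formula (3): strengthen w[i,j] when i, j and the associate neuron fire."""
    return math.sqrt(w_ij**2 + a_i * a_j * a_assoc * w_assoc_j * L)

def decay_update(w_ij):
    """Formula (4): an inactive link decays toward zero."""
    return math.sqrt(max(0.0, w_ij**2 - K))

def disassociate_update(w_ij, a_i, a_j, a_dis, w_dis_j):
    """Formula (5): weaken w[i,j] under a disassociate neuron's influence."""
    return math.sqrt(max(0.0, w_ij**2 - a_i * a_j * a_dis * w_dis_j * D))

w = 0.3
w = associate_update(w, 1.0, 1.0, 1.0, 1.0)   # co-activation strengthens the link
w = decay_update(w)                            # inactivity weakens it again
print(w, "weak link" if w < T else "intact link")
```

Because everything happens on the squared weights under a square root, repeated learning saturates gradually instead of growing without bound.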

Now you can create a simple feed-forward network for your agent. If you have experience with ANNs, this information gives you some knowledge about the black box.

Have a good day, thank you.
