Cognitive Programming


In the recent past, I have been thinking a lot about the cognitive behavior of organizations, and this article grew out of that. I wanted to write about the cognitive sense of an organization and the direction of its agility. Before moving to the organizational level, I felt it was important to first understand how various companies are investing in this direction. This article is a quick glance at the cognitive computing services offered by industry leaders.

Abstract


Cognitive, /ˈkɒɡnɪtɪv/, is derived from the word cognition, the process of acquiring knowledge and understanding. As humans, we have the added advantage of self-learning, although the concept of learning is often used as a synonym for understanding or interpreting by an individual through perception, whether verbal, semantic, or syntactic. Machines, on the other hand, being brainless, need a program or instructions in order to learn. Hence the need for cognitive programming.

Jean Piaget is the psychologist best known for his theory of cognitive development. Although his theory is based on how children learn, the same principles are suitable for programming machines as well. This can be considered the third generation of computing. In the beginning we had the abacus model, supported by tabular systems; accounting is the best example of this model. Then came programming, which automated these models over shorter and quicker intervals; all programming languages fall under the second generation of computing. Now that this model has matured to a stable state, it is time to infuse these programming models with intelligence, so that they become capable of self-awareness and knowing becomes a practical approach to learning from what has happened in the past and present. Hence, cognitive development is an approach for programming these machines.

Problem


Two decades ago, storing data was a challenge, but not anymore. With the advent of current technology, we can store a terabyte of data in a one-centimeter hologram, and it is not far off that the world will see zettabyte- and yottabyte-sized data. The problem is not in storing the data but in retrieving it within a given interval. Scientists have estimated that the average human brain can store about 3 terabytes of information, with estimates generally ranging from 1 to 10 TB. Even with such a high quantity of data, most humans find it challenging to recollect past situations at any given moment.
 
Human processing starts about half a second after an instruction. For example, when we meet a person and want to recollect who they are, when we last met, and other details about them, the brain starts functioning only 300 to 500 milliseconds after it gets the instruction to recollect, which is about half a second. A machine takes less than 50 milliseconds.
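
For a rough feel of the machine side of this comparison, here is a minimal Python sketch that times an in-memory lookup; the data set is invented and the printed timing is illustrative, not a benchmark.

```python
import time

# Illustrative "memory": a dictionary of one million person records.
people = {f"person_{i}": {"name": f"Person {i}", "last_met": "2015-06-01"}
          for i in range(1_000_000)}

start = time.perf_counter()
record = people["person_424242"]          # recall a specific person
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Recalled {record['name']} in {elapsed_ms:.4f} ms")
# On commodity hardware a hash lookup takes microseconds, far below the
# ~50 ms figure cited above; it is retrieval from archived (disk or
# network) sources where latency grows to seconds.
```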

Surprisingly, both human and machine struggle to retrieve data from archived sources in the 2 to 5 second range. Beyond 5 seconds the battle is won by the machine, whereas humans who fail to recollect information within 5 seconds will typically keep failing for up to 17 minutes, and at times never recollect it at all. It has been shown that when humans do not recollect within 5 seconds, their failure rate deteriorates at a faster pace until the brain ends in a deadlock. Meanwhile, the data in the world is doubling every 12 to 18 months, and the truth is that most of this data is unstructured.

So the need of the hour is better technologies and more mature systems that can adapt to changes they were not programmed for and produce better, actionable decisions. What is needed is a natural relationship between human and machine within any given domain, with the machine able to advise on services in that domain. Such a system must be able to evaluate its own performance and relate it to other, similar situations. This helps it acquire data for different situations and generate probabilistic answers from the knowledge hidden in unstructured data.

Most industry leaders now expose their research in this field through APIs, giving access to their neuromorphic architectures. These APIs let businesses take advantage of highly complex neuron-model-based systems to predict their business behavior. They also act as a functional equivalent of the natural cognitive process, trained with input from many consumers. The learnings from these consumers become more useful as they are widely shared, so one industry's adoption of these APIs helps other industries as well.
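
As a toy sketch of that shared-learning effect, the Python snippet below lets feedback from one consumer improve a shared model that another consumer then benefits from; the words, labels, and scoring rule are all invented for illustration.

```python
# Minimal sketch of cross-consumer learning: feedback from each consumer
# nudges a shared model, so later consumers benefit from earlier ones.
# (Counts and labels are invented for illustration.)

from collections import Counter

shared_model = Counter()  # word -> evidence that it signals a complaint

def train(feedback_text, is_complaint):
    for word in feedback_text.lower().split():
        shared_model[word] += 1 if is_complaint else -1

def score(text):
    return sum(shared_model[w] for w in text.lower().split())

# Industry A (retail) trains the shared model...
train("parcel arrived broken", True)
train("great service", False)

# ...and industry B (banking) immediately benefits from those learnings.
print(score("statement arrived broken"))  # positive => likely a complaint
```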

The entire foundation of this kind of programming should be inference over information rather than a deterministic system. This can only be achieved if the data is reflected upon through experience-based learning. The whole process should be built on natural language interaction combined with a perception process, and these programming models should be self-reinforcing systems. It is very hard to predict natural, intuitive behavior for each problem or situation; it requires deep reasoning, and the reasoning must be judged against evidence found in the augmented data, along with augmented human perception.
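
To make the contrast with a deterministic system concrete, here is a minimal Bayesian-update sketch in Python; the prior and likelihood numbers are made up, chosen only to show how accumulating evidence shifts a probabilistic belief rather than firing a fixed rule.

```python
# Deterministic rule: "if keyword present then category X" -- brittle.
# Probabilistic inference: update a belief as evidence accumulates.

prior = 0.5  # initial belief that a document is about "finance" (assumed)

# Likelihoods of each observed word given the hypothesis (illustrative).
evidence = [
    ("interest", 0.8, 0.3),   # P(word | finance), P(word | not finance)
    ("rate",     0.7, 0.4),
    ("piano",    0.1, 0.5),
]

belief = prior
for word, p_given_h, p_given_not_h in evidence:
    # Bayes' rule: P(H | e) = P(e | H) P(H) / P(e)
    numerator = p_given_h * belief
    belief = numerator / (numerator + p_given_not_h * (1 - belief))
    print(f"after '{word}': P(finance) = {belief:.3f}")
```

Each new word nudges the belief up or down instead of flipping a hard switch, which is the kind of evidence-weighed reasoning described above.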

It is all about forming patterns and porting them to the given situation. The challenge for these programming models is sequencing natural processing in the context of humans collaborating with advanced machinery. Personifying the business from the context of the user is what makes any cognitive programming successful. The learnings of these machines will become obsolete over a period of 5 to 8 years, so what the archaic machines have learned has to be shifted to newer infrastructure; not only must the hardware of future systems change, but the software as well. Another challenge in this technology shift is privacy, when an individual's context is applied to other, similar contexts. Perceptual intelligence applied in context is the way to move data from unstructured sources into a variety of presentation mechanisms.
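
One practical piece of that migration is serializing what a model has learned so it can be reloaded on newer infrastructure without retraining from scratch. Below is a minimal, generic Python sketch; the LearnedModel class, its weights, and the file name are hypothetical.

```python
import json

class LearnedModel:
    """A stand-in for any trained model: its 'learnings' are just weights."""
    def __init__(self, weights=None):
        self.weights = weights or {}

    def save(self, path):
        # Persist learnings in a portable format, not a hardware-bound one,
        # so a newer system can load them without retraining from scratch.
        with open(path, "w") as f:
            json.dump(self.weights, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(json.load(f))

# On the archaic machine:
old = LearnedModel({"feature_a": 0.92, "feature_b": -0.31})
old.save("learnings.json")

# On the new infrastructure:
new = LearnedModel.load("learnings.json")
print(new.weights)
```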

Industry leaders


Microsoft Cortana


Microsoft's cognitive computing services were formerly known as Project Oxford. These services power Cortana and are reachable at https://www.microsoft.com/cognitive-services. Microsoft Cognitive Services expands on Microsoft's evolving portfolio of machine learning APIs and enables developers to easily add intelligent features, such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding, into their applications. The vision is for more personal computing experiences and enhanced productivity, aided by systems that increasingly can see, hear, speak, understand, and even begin to reason.
 
Cortana comes with natural language processing and multimodal interfaces. It serves all three styles of interaction with humans: speech recognition, vision recognition, and language understanding, from formal grammar to colloquial speech. Its neural networks can learn through training and can identify difficulties.
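
As a hedged example of calling one of these services, the sketch below posts text to the Text Analytics sentiment endpoint as it was documented around the time of Project Oxford; the region, version, and response shape are assumptions, so check the current Cognitive Services documentation before relying on them.

```python
import json
import urllib.request

# Endpoint region and API version are assumptions from the era of this
# article; consult the current Cognitive Services docs before use.
URL = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
KEY = "YOUR_COGNITIVE_SERVICES_KEY"

body = json.dumps({
    "documents": [
        {"id": "1", "language": "en",
         "text": "Cortana understood my colloquial request perfectly."}
    ]
}).encode("utf-8")

request = urllib.request.Request(URL, data=body, headers={
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": KEY,  # standard auth header for these APIs
})
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # e.g. a sentiment score per document
```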
 
 
 

IBM Watson


IBM is one of the industry leaders in the field of cognitive computing: http://www.ibm.com/watson

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Watson is initially put into learning mode in several phases. In the first phase of inducing cognitive intelligence in any field, Watson is fed digital data of all varieties. The second phase is to sanitize this data through human-interactive language, in the form of spoken English; this phase is practiced by means of questions and answers, during which Watson learns the jargon as well as the pattern of the questions.
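
To make the question-and-answer phase concrete, here is a toy Python sketch of a system that ingests Q&A pairs, picks up the vocabulary (jargon) of a domain, and answers a new question by similarity. This illustrates the idea only; it is not Watson's actual API, and the pairs are invented.

```python
# Toy sketch of the Q&A training phase: ingest question/answer pairs,
# learn the domain vocabulary, answer new questions by word overlap.

def tokens(text):
    return set(text.lower().replace("?", "").split())

training_pairs = [
    ("What is the claim filing deadline?",
     "Claims must be filed within 30 days."),
    ("How do I appeal a denied claim?",
     "Submit an appeal form to the review board."),
]

def answer(question):
    q_tokens = tokens(question)
    # Pick the trained question sharing the most vocabulary (jargon overlap).
    best_q, best_a = max(training_pairs,
                         key=lambda qa: len(q_tokens & tokens(qa[0])))
    return best_a

print(answer("What deadline applies to filing a claim?"))
# -> "Claims must be filed within 30 days."
```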

As this progresses alongside the digitization of the data, the data accumulated from voice and text needs to be proofed. At this juncture, human intervention is highly important: the brightest minds help Watson distinguish unwanted information from the most significant points.

IBM states that the era of computing has evolved from the tabulating systems of the 1900s to the programmable systems era of the 1950s. Since then, advances in machinery brought many changes to programmable systems, and with Watson IBM has reached the current state: cognitive systems. Beyond this, IBM envisions cognitive computing, or the "brain cube," anticipated by 2020.

Google's DeepMind


Google acquired DeepMind's AI platform, http://www.deepmind.com, which made history as the first computer program to beat a professional player at the game of Go, a game a googol times more complex than chess. Demis Hassabis, CEO of DeepMind, criticizes other vendors in this field as offering only "narrow" artificial intelligence, designed for one and only one purpose, whereas DeepMind's vision is to build AGI, artificial "general" intelligence. Demis describes their goal as converting unstructured information into actionable knowledge by building general-purpose learning machines. These systems should learn automatically from raw inputs rather than being pre-programmed, and they are built to be general, so that the same system can operate across a wide range of tasks.
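
As a hedged illustration of learning from raw reward signals rather than pre-programmed rules, here is a minimal tabular Q-learning sketch in Python. The 5-cell corridor environment, rewards, and hyperparameters are all invented for illustration; DeepMind's actual systems use deep neural networks at a far larger scale.

```python
import random

# Minimal tabular Q-learning: the agent learns purely from experienced
# rewards, with no task-specific rules programmed in. The reward sits at
# the right end of a 5-cell corridor.

STATES, ACTIONS = 5, [-1, +1]           # positions 0..4; move left/right
Q = {(s, a): 0.0 for s in range(STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != STATES - 1:
        # Explore occasionally, otherwise exploit what has been learned.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), STATES - 1)
        reward = 1.0 if s_next == STATES - 1 else 0.0
        # Q-learning update: fold the observed reward into the estimate.
        Q[(s, a)] += alpha * (reward
                              + gamma * max(Q[(s_next, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s_next

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(STATES - 1)})
# After training, the learned policy is "+1" (move right) in every state.
```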
 

Conclusion


Given our understanding of systems neuroscience, and the goal of making machines adapt to, or learn from, the data they are fed, it is highly difficult for a fixed set of programmed lines to remain functional as time progresses. These machines have to learn directly from their experiences, without any retraining.


---------------------------------------
This is just out of my collection; if you have any objections, drop me a mail at dskcheck@gmail.com