Neural, Symbolic and Connectionism Learning Models

Neural Network Models

In my opinion, neural network models have the potential to become an artificial intelligence that thinks like a human. Certainly, this will not happen in the near future, as several obstacles must be overcome before such a computer can be built. First, the amount of data needed for implicit learning, which is characteristic of the human brain, is enormous. Teaching a machine to learn to read using the back-propagation rule and fuzzy logic is a simple task compared with more complex ones that require weighing a great number of alternatives based on the associations, implications, and experience that human beings draw on with ease.
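The back-propagation rule mentioned here can be illustrated with a minimal sketch: a tiny two-layer network is trained on the XOR pattern by propagating the output error backwards through the chain rule and adjusting the weights. The network size, learning rate, and iteration count below are purely illustrative choices, not taken from any of the cited works.

```python
import numpy as np

# Training data: the XOR problem, a classic test for multi-layer networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)

_, out = forward()
loss_before = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward()
    err = out - y                           # output-layer error
    d_out = err * out * (1 - out)           # chain rule at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)    # error propagated back one layer
    W2 -= 0.5 * (h.T @ d_out)               # gradient-descent updates
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_hid)
    b1 -= 0.5 * d_hid.sum(axis=0)

_, out = forward()
loss_after = float(np.mean((out - y) ** 2))
```

After training, the mean squared error has dropped, which is the essence of the rule: each pass pushes the weights a small step in the direction that reduces the output error.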

For example, when people face a problem they do not know how to solve, they think of its different implications, use various associations, and draw on similar experience to find a solution, while computers rely only on their computing capabilities (He, Chen, & Yin, 2016). Second, and here I agree with the author, by the time such artificial intelligence is created, humans themselves will already be partially machines with the same capabilities, and so will always remain ahead of machines. Thus, the creation of a human-like AI, if it is possible at all, would be pointless. Simple robots, on the other hand, can be very useful to people (Lefrancois, 2011).

The most effective way to explain the concepts mentioned in this chapter is to apply the methods the author has used, namely, to provide vivid examples and situations that demonstrate the identified concepts in practice.

Chan, Jaitly, Le, and Vinyals (2016) present a neural speech recognizer called Listen, Attend and Spell (LAS), which transcribes speech utterances directly to characters without the separate components, such as hidden Markov models or pronunciation models, that conventional speech recognizers rely on. The system consists of a listener, which encodes the audio, and a speller, which produces the characters. In the reported tests, LAS reaches a word error rate (WER) of almost 15%, compared with the 8% achieved by the best conventional recognizers.

Since a lower WER means fewer mistakes, LAS does not yet match conventional systems. However, the machine learning industry is developing very quickly now, and, in my opinion, this number will soon decrease.
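The WER metric used to compare these recognizers is simply the word-level edit distance between the hypothesis and the reference transcript, divided by the reference length. A small illustrative implementation (not taken from the LAS paper) makes the comparison concrete:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words (classic Levenshtein table).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of four: WER = 1/4 = 0.25
print(wer("listen attend and spell", "listen and spell"))  # → 0.25
```

A recognizer with a WER of 0.08 thus gets roughly 92 of every 100 reference words right, which is why 8% beats 15%.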

Symbolic and Connectionism Models

The symbolic model is based on the assumption that any concept can be represented symbolically and manipulated according to certain rules. Initially, this model was used in attempts to create computers that would surpass the human brain (He et al., 2016).
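The idea of representing concepts symbolically and applying explicit rules can be sketched with a toy forward-chaining system. The facts and rules below are invented for illustration; they only show the mechanism of deriving new symbols from old ones by rule application.

```python
# Each rule is (set of premises, conclusion): if all premises are known
# facts, the conclusion becomes a new fact.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain({"has_feathers", "lays_eggs", "cannot_fly"})
```

Everything the system "knows" is explicit and inspectable, which is exactly what distinguishes symbolic models from the connectionist ones discussed next.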

Connectionist models are based on fuzzy logic that resembles the workings of the human brain. They are less predictable but rather good at recognition tasks such as reciting a poem with expression or recognizing items in pictures (Lefrancois, 2011).

I certainly find connectionist models more interesting: although their abilities are still rather limited, it is fascinating to observe a human-like thinking process in machines built on them. It is difficult to imagine a machine looking at a picture, offering different interpretations of what it sees, and explaining its point of view, yet, theoretically, connectionist models make this achievable. In my opinion, such machines could soon be created to assist people in routine, robot-like professions.

Townsend, Keedwell, and Galton (2014) analyze neural-symbolic networks whose function is to represent logic programs. The motivation for developing these networks lies in work on a biologically plausible network that represents knowledge in the same way as the human brain. Since their experiments in evolving genomes proved successful, the authors expect that developing connections between neurons in neural-symbolic networks will succeed as well. In my opinion, fusing biological and synthetic models will result in a much more efficient hybrid model.

References

Chan, W., Jaitly, N., Le, Q., & Vinyals, O. (2016). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4960-4964).

He, W., Chen, Y., & Yin, Z. (2016). Adaptive neural network control of an uncertain robot with full-state constraints. IEEE Transactions on Cybernetics, 46(3), 620-629.

Lefrancois, G. R. (2011). Theories of human learning: What the professor said (6th ed.). Belmont, CA: Wadsworth Publishing.

Townsend, J., Keedwell, E., & Galton, A. (2014). Artificial development of biologically plausible neural-symbolic networks. Cognitive Computation, 6(1), 18-34.
