# Concepts and Applications of Deep Learning

1704 words (7 pages) Essay

9th Apr 2018 · Computer Science


**Abstract:**

Since 2006, deep learning, also known as hierarchical learning, has evolved as a new field of machine learning research. Deep learning models address problems on which shallow architectures (e.g., regression) suffer from the curse of dimensionality. As part of a two-stage learning scheme involving multiple layers of nonlinear processing, a set of statistically robust features is automatically extracted from the data. This tutorial, introducing the deep learning special session, details the state-of-the-art models and summarizes the current understanding of this learning approach, which has become a reference for many difficult classification tasks. Deep learning was introduced with the objective of moving machine learning closer to one of its original goals: artificial intelligence. It is about learning multiple levels of representation and abstraction that help make sense of data such as images, sound, and text.

**Introduction:**

Consider the task of identifying someone's handwriting. People write differently; take, for example, the digits '7' and '9'. If there is a closed loop at the top of the vertical line, we call the digit a '9'; if there is a horizontal line instead of a loop, we call it a '7'. What we use for reliable digit recognition is a clever assembly of smaller features into a whole: detecting distinct edges to form lines, distinguishing horizontal from vertical lines, noting the position of the vertical stroke beneath the horizontal one, detecting a loop at the top, and so on.

The idea of deep learning is the same: learn multiple levels of features that work jointly to describe increasingly abstract aspects of the data.
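As a toy illustration of this idea, the crude '7'-vs-'9' rule above can be written as a composition of small feature detectors. This is a hypothetical sketch (the grid encoding and detector rules are invented purely for illustration), not a real recognizer:

```python
# Digits as tiny binary grids: '#' marks ink, ' ' marks background.
NINE = ["###",
        "# #",
        "###",
        "  #",
        "  #"]

SEVEN = ["###",
         "  #",
         "  #",
         "  #",
         "  #"]

def has_top_loop(glyph):
    # Low-level feature: a closed loop in the top three rows.
    return glyph[0] == "###" and glyph[1] == "# #" and glyph[2] == "###"

def has_top_bar(glyph):
    # Low-level feature: a horizontal bar with no loop beneath it.
    return glyph[0] == "###" and not has_top_loop(glyph)

def classify(glyph):
    # Higher-level decision composed from the lower-level features.
    if has_top_loop(glyph):
        return "9"
    if has_top_bar(glyph):
        return "7"
    return "?"
```

A deep network learns detectors like `has_top_loop` from data instead of having them hand-coded; the point here is only the composition of simple features into a decision.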

So, Deep Learning is defined as follows:

“A sub-field of machine learning that is based on learning several levels of representations, corresponding to a hierarchy of features or factors or concepts, where higher-level concepts are defined from lower-level ones, and the same lower-level concepts can help to define many higher-level concepts. Deep learning is part of a broader family of machine learning methods based on learning representations. An observation (e.g., an image) can be represented in many ways (e.g., a vector of pixels), but some representations make it easier to learn tasks of interest (e.g., is this the image of a human face?) from examples, and research in this area attempts to define what makes better representations and how to learn them.” (Wikipedia, “Deep Learning,” as of February 2013; see http://en.wikipedia.org/wiki/Deep_learning.)


The performance of most machine learning algorithms depends heavily on the particular feature representation of the input data. For example, marking emails as spam or not spam can be performed by breaking the input document down into words. Selecting the right feature representation of the input data, or feature engineering, is a way for people to apply prior knowledge of a domain to improve an algorithm's accuracy and computational performance. To move toward general artificial intelligence, algorithms need to be less dependent on this feature engineering and better able to discover the explanatory factors of the input data on their own.
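The spam example above can be made concrete with a hand-engineered bag-of-words feature extractor. The vocabulary below is hypothetical and chosen only for illustration; a real filter would use a much larger vocabulary learned from data:

```python
# Hand-engineered feature extraction: represent a document by which
# vocabulary words it contains (a binary bag-of-words vector).
VOCAB = ["free", "winner", "prize", "meeting", "report"]

def features(text):
    """Map a document to a binary vector: 1 if the vocab word occurs."""
    words = set(text.lower().split())
    return [int(w in words) for w in VOCAB]

features("You are a WINNER claim your free prize")   # → [1, 1, 1, 0, 0]
features("quarterly report for the meeting")          # → [0, 0, 0, 1, 1]
```

Every choice here (the vocabulary, lowercasing, ignoring word order) is prior knowledge baked in by hand; deep learning aims to learn such representations from the data itself.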

Deep learning approaches are useful in many domains: they have had great commercial success powering much of Google's and Microsoft's current speech recognition, digital image processing, natural language processing, and object recognition. Facebook also plans to use deep learning approaches to understand its users.

How do we build a deep representation of input data? The main idea is to learn a hierarchy of features one level at a time, where the input to one computational level is the output of the previous level, for an arbitrary number of levels. By contrast, 'shallow' representations (most current algorithms, such as regression) go directly from input data to output classification.
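The level-by-level scheme can be sketched in a few lines: each level is a nonlinear transformation, and the output of one level is fed as input to the next. The layer sizes and random weights below are hypothetical, chosen only to show the wiring:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # One level of nonlinear processing.
    return np.tanh(W @ x + b)

# Hypothetical layer sizes: 4 inputs -> 8 -> 6 -> 3 outputs.
sizes = [4, 8, 6, 3]
params = [(rng.normal(size=(n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes, sizes[1:])]

x = rng.normal(size=4)
h = x
for W, b in params:
    h = layer(h, W, b)   # the output of one level is the input to the next
```

A shallow model would map `x` to a prediction in a single step; here the representation `h` is built up through an arbitrary number of such levels.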

**Inspirations for Deep Architectures**

The main inspirations for studying learning algorithms for deep architectures are the following:

**The brain has a deep architecture**

The visual cortex is well studied and shows a sequence of regions, each of which contains a representation of the input, with signals flowing from one to the next. (In reality there are also skip connections and, at some levels, parallel paths, so the picture is more complicated.) Each level of this feature hierarchy represents the input at a different level of abstraction, with more abstract features further up in the hierarchy, defined in terms of the lower-level ones.

Note that representations in the brain lie between dense distributed and purely local: they are sparse, with only about 1% of neurons active concurrently. Given the vast number of neurons, this is still a very efficient (exponentially efficient) representation.
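A tiny numeric sketch makes the 1% figure concrete: starting from a dense vector of activations, keep only the strongest 1% and zero out the rest. The sizes here are arbitrary, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=1000)          # dense activations of 1000 "neurons"

k = 10                                # keep the top 1%, as in the cortex
idx = np.argsort(np.abs(acts))[-k:]   # indices of the k strongest activations
sparse = np.zeros_like(acts)
sparse[idx] = acts[idx]               # sparse code: 99% of entries are zero

active_fraction = np.count_nonzero(sparse) / sparse.size   # 0.01
```

Even at 1% activity, 1000 units admit on the order of C(1000, 10) ≈ 10^23 distinct activity patterns, which is the sense in which a sparse distributed code is exponentially efficient.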

**Cognitive processes seem deep**

- Humans organize their ideas and concepts hierarchically.
- Humans first learn simpler concepts and then compose them to represent more abstract ones.
- Engineers break solutions up into multiple levels of abstraction and processing.

Introspection on linguistically expressible concepts also suggests a sparse representation: only a small fraction of all possible words/concepts apply to a particular input (say, a visual scene).

One good analogue for deep representations is neurons in the brain (a motivation for artificial neural networks): the output of one group of neurons becomes the input to another, forming a hierarchical layer structure. Each layer *N* is composed of *h* computational nodes, each of which connects to every computational node in layer *N+1*.
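Because every node in layer *N* connects to every node in layer *N+1*, the number of connections between two layers is simply the product of their sizes (plus one bias per output node in a standard dense layer). A small helper makes this concrete; the 784/128 sizes below are just an example:

```python
def dense_connections(h_n, h_next):
    """Weights between a fully connected layer pair: h_n * h_next."""
    return h_n * h_next

def dense_params(h_n, h_next, bias=True):
    # Each of the h_next output nodes usually also gets one bias term.
    return dense_connections(h_n, h_next) + (h_next if bias else 0)

dense_connections(784, 128)   # → 100352 weights
dense_params(784, 128)        # → 100480 parameters including biases
```

This multiplicative growth is why fully connected deep networks accumulate parameters quickly as layers widen.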

**Related Work:**

Historically, the concept of deep learning originated in artificial neural network research. (Hence one may occasionally hear talk of "new-generation neural networks.") Feed-forward neural networks, or MLPs with many hidden layers, often referred to as deep neural networks (DNNs), are good examples of models with a deep architecture. Back-propagation (BP), popularized in the 1980s, has been the best-known algorithm for learning the parameters of these networks. Unfortunately, back-propagation alone did not work well in practice for learning networks with more than a small number of hidden layers (see a review and analysis in (Bengio, 2009; Glorot and Bengio, 2010)). The pervasive presence of local optima in the non-convex objective function of deep networks is the main source of difficulty in learning. Back-propagation is based on local gradient descent and usually starts from a random initial point. It often gets trapped in poor local optima when the batch-mode BP algorithm is used, and the severity increases significantly as the depth of the network increases. This difficulty is partly responsible for steering most machine learning and signal processing research away from neural networks toward shallow models with convex loss functions (e.g., SVMs, CRFs, and MaxEnt models), for which the global optimum can be obtained efficiently at the cost of less modeling power.
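The sensitivity of gradient descent to its random starting point can be seen even in one dimension. The sketch below uses a hypothetical non-convex function f(w) = w⁴ − 3w² + w (not any particular network's loss); plain gradient descent started from two different points settles into two different minima, only one of which is global:

```python
def f(w):
    # A simple non-convex objective with two minima.
    return w**4 - 3*w**2 + w

def grad(w):
    # Its derivative: 4w^3 - 6w + 1.
    return 4*w**3 - 6*w + 1

def descend(w, lr=0.01, steps=2000):
    # Plain (batch) gradient descent from a given initial point.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_right = descend(2.0)    # settles in a poor local minimum near w ≈ 1.13
w_left = descend(-2.0)    # settles in the global minimum near w ≈ -1.30
```

The same local-descent dynamics, in the vastly higher-dimensional and more rugged loss surface of a deep network, is what made plain back-propagation unreliable for deep models.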

**The application domains for deep learning:**

- In natural language processing, a very interesting approach demonstrates that deep architectures can perform multi-task learning, giving state-of-the-art results on difficult tasks such as semantic role labeling. Deep architectures can also be applied to regression with Gaussian processes [37] and to time-series prediction.
- Another interesting application area is highly nonlinear data compression. To reduce the dimensionality of an input instance, it is sufficient for a deep architecture that the number of units in its last layer is smaller than its input dimensionality.
- Moreover, adding layers to a neural network can lead to learning more abstract features, from which input instances can be encoded with high accuracy in a more compact form.
- Reducing the dimensionality of data was one of the first applications of deep learning.
- This approach is very efficient for semantic hashing of text documents, where the codes generated by the deepest layer are used to build a hash table over a set of documents.
- A similar approach for a large-scale image database is presented in this special session.
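To make the semantic-hashing idea concrete, the sketch below uses a single random (untrained) encoding layer purely to illustrate the mechanics: each document vector is mapped to a short binary code, and documents sharing a code land in the same hash bucket. In a real system the code would come from the trained deepest layer of a deep autoencoder, and similar documents would receive similar codes; all sizes and weights here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 documents as 10-dimensional feature vectors,
# compressed to 3-bit binary codes by one (untrained) encoding layer.
docs = rng.normal(size=(8, 10))
W = rng.normal(size=(3, 10))

def binary_code(x):
    # Sigmoid activations thresholded at 0.5 give a 3-bit code.
    h = 1.0 / (1.0 + np.exp(-(W @ x)))
    return tuple((h > 0.5).astype(int))

# Hash table: documents with the same code share a bucket, so lookup
# of candidate neighbours is a single O(1) table access.
table = {}
for i, d in enumerate(docs):
    table.setdefault(binary_code(d), []).append(i)
```

With an n-bit code there are at most 2ⁿ buckets, so retrieval cost is independent of corpus size, which is the appeal of the approach.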


**Conclusion:**

- Deep learning is about creating an abstract hierarchical representation of the input data to produce useful features for traditional machine learning algorithms. Each layer in the hierarchy learns a more abstract and complex feature of the data, e.g., from edges to eyes to faces.
- This representation gets its power of abstraction by stacking nonlinear functions, where the output of one layer becomes the input to the next.
- The two main schools of thought for analyzing deep architectures are *probabilistic* vs. *direct encoding*.
- The probabilistic interpretation means that each layer defines a distribution over hidden units given the observed input, P(h | x).
- The direct-encoding interpretation learns two separate functions, the *encoder* and the *decoder*, to transform the observed input to the feature space and back to the observed space.
- These architectures have had great commercial success so far, powering many natural language processing and image recognition tasks at companies like Google and Microsoft.
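The encoder/decoder pair of the direct-encoding view can be illustrated with a toy linear autoencoder. The data below lie on a one-dimensional line in 3-D, so a single hidden unit can represent each point exactly; the weights are chosen by hand for this toy case rather than learned, purely to show the two functions at work:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])          # direction the toy data lie along
t = np.linspace(-1.0, 1.0, 20)
X = np.outer(t, v)                     # 20 points on a line in 3-D

def encoder(x):
    # Transform the 3-D observation into the 1-D feature space.
    return (v @ x) / (v @ v)

def decoder(h):
    # Transform the 1-D code back to the observed 3-D space.
    return h * v

X_hat = np.array([decoder(encoder(x)) for x in X])   # reconstruction
```

Reconstruction is exact here only because the data has rank one; for real data the encoder and decoder are trained to make the reconstruction error as small as possible.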
