OK, so what is an activation function? If you don’t know, let me give you a bookish definition.
An activation function is a mathematical function used in a neural network that activates neurons and introduces non-linearity by transforming their inputs. Activation functions are also known as transfer functions.
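To make the definition concrete, here is a minimal sketch of two common activation functions (ReLU and sigmoid) applied inside a toy neuron; the `neuron` helper and its inputs are made up purely for illustration:

```python
import math

def relu(x):
    # ReLU: passes positive values through, clips negatives to zero
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias, activation):
    # A single neuron: weighted sum of inputs, then the activation
    z = sum(w * i for w, i in zip(weights, inputs)) + bias
    return activation(z)

print(relu(-2.0))    # 0.0 -- the non-linearity: negatives are cut off
print(sigmoid(0.0))  # 0.5
print(neuron([1.0, 2.0], [0.5, -0.25], 0.0, relu))  # 0.0
```

Without the activation, stacked layers of weighted sums would collapse into a single linear map; the non-linear transform is what lets the network learn non-linear decision boundaries.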
Now, we all know such definitions and apply activation functions in our everyday deep learning work. But here, in this article, I will try to answer most of the doubts we have regarding activation functions with very minimal math and lots of intuition.
In the era of machine learning, data is the new oil, as we all know. We are also aware that gathering data is a non-trivial task. Even after we gather data, a huge portion of it is unlabelled and therefore unusable for supervised learning. Annotating/labeling data requires enormous manual effort, cost, and time. In this article, I will show you how Active Learning can help us solve the data labeling problem.
Active Learning is a method by which the learning algorithm inspects all the data points and selects a few data points on which the…
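One common selection strategy in active learning is uncertainty sampling: ask a human to label the points the model is least confident about. The sketch below is a hypothetical illustration (the point IDs and probabilities are made up), not the specific method this article goes on to describe:

```python
def least_confident(predictions, k):
    # predictions: list of (point_id, max_class_probability) from the model
    # Rank by confidence ascending, so the least confident points come first
    ranked = sorted(predictions, key=lambda p: p[1])
    return [pid for pid, _ in ranked[:k]]

# Hypothetical unlabeled pool with the model's confidence in its own prediction
preds = [("a", 0.95), ("b", 0.55), ("c", 0.80), ("d", 0.51)]
print(least_confident(preds, 2))  # ['d', 'b'] -- send these two for labeling
```

Labeling only the points the model struggles with tends to improve it faster than labeling a random sample of the same size.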
Suppose you have built two classifiers, A and B, and chosen accuracy as the performance metric. You find that model A has an accuracy of 88% and an average confidence of 89% in each prediction. On the contrary, model B also has an accuracy of 88% but an average confidence of 95% in each prediction. In such a case, which model would you deploy in production?
Typically, everyone might think model B is better. Here I want to argue that model A is better. The reason is that model A is good at self-assessment. Model A thinks that…
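The intuition above can be put in numbers as a simple calibration gap: the distance between how confident a model claims to be and how accurate it actually is. This is only a back-of-the-envelope sketch using the hypothetical figures from the text (a full treatment would use something like expected calibration error over confidence bins):

```python
def calibration_gap(accuracy, avg_confidence):
    # A well-calibrated model's average confidence matches its accuracy
    return abs(avg_confidence - accuracy)

gap_a = calibration_gap(0.88, 0.89)  # model A: 88% accurate, 89% confident
gap_b = calibration_gap(0.88, 0.95)  # model B: 88% accurate, 95% confident

print(round(gap_a, 2), round(gap_b, 2))  # model A's gap is far smaller
```

Model B is overconfident: it is wrong 12% of the time while claiming 95% certainty, which is exactly the kind of self-assessment failure that hurts in production.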
So, many of us are machine learning engineers, enthusiasts, or otherwise connected to the field of computer science. As we know, the field of AI is growing every day, along with the curiosity of engineers to apply it. The usage of AI is a blessing to the world, but it obviously comes with a cost, maybe one of the major costs that we shall pay, not now but in the near future: carbon footprints.
It is very evident that if we are using computers, we need computing power…
If you are interested in Natural Language Processing, then language models are a crucial part of NLP that you need to understand.
Before the advancement of machine learning, specifically deep learning, language models were built on different statistical approaches, such as n-gram language models, hidden Markov models, and rule-based NLP.
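A bigram model (the n-gram model with n = 2) is the simplest of these statistical approaches: estimate the probability of the next word given the current word by counting pairs in a corpus. A minimal sketch, with a toy three-sentence corpus invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count how often each word w2 follows each word w1
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for w1, w2 in zip(words, words[1:]):
            counts[w1][w2] += 1
    return counts

def next_word_prob(counts, w1, w2):
    # P(w2 | w1) = count(w1 w2) / count(w1 followed by anything)
    total = sum(counts[w1].values())
    return counts[w1][w2] / total if total else 0.0

corpus = ["i like nlp", "i like deep learning", "i love nlp"]
model = train_bigram(corpus)
print(next_word_prob(model, "i", "like"))  # 2/3: "like" follows "i" in 2 of 3 sentences
```

Real n-gram models add smoothing for unseen pairs and longer contexts, but the core idea is exactly this counting.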
Language models are so useful nowadays that most of the big tech companies use them extensively in their most valuable products. Google Assistant, Apple Siri, Microsoft Cortana, and Amazon Alexa all use language models for different purposes.
Here in this…
TSNE is considered state of the art in the area of dimensionality reduction, specifically for visualizing very high-dimensional data. Although many techniques are available for reducing high-dimensional data (e.g. PCA), TSNE is considered one of the best; it was introduced in an official paper published in 2008. The researcher Laurens van der Maaten himself maintains a beautiful website with details about TSNE.
Here in this blog, I will cover the following points:
1. Limitations of PCA
2. How TSNE works (geometrically)
3. A good example of TSNE to follow
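Before diving into those points, here is a minimal usage sketch, assuming scikit-learn is installed; the dataset choice and parameter values are just reasonable illustrative defaults, not recommendations from the paper:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Project the 64-dimensional handwritten-digits data down to 2-D
X, y = load_digits(return_X_y=True)
X = X[:200]  # a small subset keeps the run fast

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)  # (200, 2): one 2-D point per image, ready to scatter-plot
```

Coloring the resulting scatter plot by the digit labels `y[:200]` typically shows digits of the same class clustering together, which is exactly the local-structure preservation TSNE is prized for.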
Machine Learning Engineer