Learn about gradient descent, an optimization algorithm used to train machine learning models by minimizing errors between predicted and actual results.

Gradient descent is an optimization algorithm commonly used to train machine learning models and neural networks. Training data helps these models learn over time, and the cost function within gradient descent specifically acts as a barometer, gauging accuracy with each iteration of parameter updates. The model will continue to adjust its parameters to yield the smallest possible error until the cost function is close to or equal to zero. Once machine learning models are optimized for accuracy, they can be powerful tools for artificial intelligence (AI) and computer science applications.

## How does gradient descent work?

Before we dive into gradient descent, it may help to review some concepts from linear regression. You may recall the formula for the slope of a line, y = mx + b, where m represents the slope and b is the intercept on the y-axis. You may also recall plotting a scatterplot in statistics and finding the line of best fit, which requires calculating the error between the actual output and the predicted output (y-hat) using the mean squared error formula. The gradient descent algorithm behaves similarly, but it operates on a convex cost function. The starting point is just an arbitrary point at which to evaluate performance. From that starting point, we find the derivative (or slope), and from there we can use a tangent line to observe the steepness of the slope.
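The ideas above can be combined into a short sketch: starting from arbitrary values of m and b, we repeatedly compute the partial derivatives of the mean squared error and step against them. The learning rate, step count, and sample data below are illustrative assumptions, not values from the article.

```python
# A minimal sketch of gradient descent for simple linear regression,
# minimizing mean squared error between y and y_hat = m*x + b.

def gradient_descent(xs, ys, lr=0.01, steps=2000):
    m, b = 0.0, 0.0  # arbitrary starting point
    n = len(xs)
    for _ in range(steps):
        # predictions and errors under the current parameters
        preds = [m * x + b for x in xs]
        errors = [p - y for p, y in zip(preds, ys)]
        # partial derivatives of MSE with respect to m and b
        dm = (2 / n) * sum(e * x for e, x in zip(errors, xs))
        db = (2 / n) * sum(errors)
        # step downhill on the cost surface
        m -= lr * dm
        b -= lr * db
    return m, b

# Example: points sampled from the line y = 2x + 1;
# the fitted m and b should approach 2 and 1.
m, b = gradient_descent([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Because the MSE of a linear model is convex, any starting point converges to the same minimum; the learning rate only controls how quickly (or whether) the steps settle there.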