Each algorithm searches for a set of parameters that minimizes a loss function. It does this by evaluating the current parameters against the training data and then adjusting them. Standard gradient descent (GD) evaluates all training samples before each parameter update, so every step is accurate but expensive: you take big yet slow steps toward the solution. Stochastic gradient descent (SGD), by contrast, evaluates a single training sample per parameter update, which is like taking small yet quick steps toward the solution.
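To make the contrast concrete, here is a minimal NumPy sketch that fits a small linear model with both update rules. The data, learning rate, and step counts are illustrative assumptions, not values from the text; the only point is that GD's gradient is computed over all samples per step, while SGD's uses one randomly chosen sample.

```python
# Illustrative sketch: full-batch GD vs. SGD on linear regression
# with squared loss. All names and numbers here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def grad(w, Xb, yb):
    """Gradient of mean squared error over the batch (Xb, yb)."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

lr, steps = 0.1, 200

# Full-batch gradient descent: each step uses ALL samples,
# so it is expensive per step but follows the exact descent direction.
w_gd = np.zeros(3)
for _ in range(steps):
    w_gd -= lr * grad(w_gd, X, y)

# Stochastic gradient descent: each step uses ONE random sample,
# so it is cheap per step but the direction is noisy.
w_sgd = np.zeros(3)
for _ in range(steps):
    i = rng.integers(len(y))
    w_sgd -= lr * grad(w_sgd, X[i:i+1], y[i:i+1])

print("GD estimate: ", w_gd)
print("SGD estimate:", w_sgd)
```

Both runs should land near the true weights, but the SGD trajectory wanders more from step to step, which is the trade-off the prose above describes: per-step cost versus per-step precision.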