diff --git a/latex/content.tex b/latex/content.tex
index 57611c5..0c4efc2 100644
--- a/latex/content.tex
+++ b/latex/content.tex
@@ -152,7 +152,12 @@ after each simulation.
 
 We set out to study the resource consumption of the algorithms. We implemented
 the above formulae to calculate the mean and variance of $ N = 10^6 $ random
-numbers. We wrote the following algorithms :
+numbers. We wrote the following algorithms:\footnotemark
+
+\footnotetext{The full code used to measure performance can be found in Annex X.}
+% TODO annex
+
+\paragraph{Intuitive algorithm} Store the values first, compute the statistics afterwards.
 
 \begin{lstlisting}[language=python]
 N = 10**6
@@ -161,9 +166,38 @@
 mean = mean(values)
 variance = variance(values)
 \end{lstlisting}
 
-% TODO : code
-% TODO : add a graph
-
+Execution time: $\sim 4.8$ seconds
+
+Memory usage: $\sim 32$ MB
+
+\paragraph{Improved algorithm} Calculate on the fly, without storing the values.
+
+\begin{lstlisting}[language=python]
+N = 10**6
+Tot = 0   # running sum of the values
+Tot2 = 0  # running sum of the squared values
+for _ in range(N):
+    item = random()
+    Tot += item
+    Tot2 += item ** 2
+mean = Tot / N
+variance = (Tot2 - N * mean**2) / (N - 1)
+\end{lstlisting}
+
+Execution time: $\sim 530$ milliseconds
+
+Memory usage: $\sim 1.3$ kB
+
+\paragraph{Analysis} Memory usage is, as expected, much lower when calculating
+the statistics on the fly. What we had not anticipated is the execution time:
+the improved algorithm is nearly 10 times faster than the intuitive one. This
+can be explained by the time the intuitive algorithm spends allocating memory
+for the list and then iterating over it several times to compute the
+statistics.\footnotemark
+
+\footnotetext{Performance was measured on a single computer and will vary
+  between devices. Execution time and memory usage do not include the import
+  of the libraries.}
 
 \subsection{NFBP vs NFDBP}
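
The footnote above defers the full measurement code to Annex X, which is still a TODO in this diff. Purely as an illustration of how such figures can be obtained, and not as the code actually used, here is a minimal sketch that times both algorithms with Python's standard \texttt{time.perf\_counter} and reports peak allocation with \texttt{tracemalloc}; the wrapper functions \texttt{intuitive} and \texttt{improved} are hypothetical names around the two listings above.

\begin{lstlisting}[language=python]
# Sketch only -- not the Annex X code referenced in the footnote.
# Assumes the two algorithms are wrapped in the hypothetical functions below.
import time
import tracemalloc
from random import random
from statistics import mean, variance

N = 10**6

def intuitive():
    values = [random() for _ in range(N)]
    return mean(values), variance(values)

def improved():
    tot = 0.0   # running sum of the values
    tot2 = 0.0  # running sum of the squared values
    for _ in range(N):
        item = random()
        tot += item
        tot2 += item ** 2
    m = tot / N
    return m, (tot2 - N * m**2) / (N - 1)

for algo in (intuitive, improved):
    tracemalloc.start()                      # track Python-level allocations
    start = time.perf_counter()
    algo()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{algo.__name__}: {elapsed:.2f} s, peak {peak / 1e6:.2f} MB")
\end{lstlisting}

Note that \texttt{tracemalloc} only tracks allocations made through Python's allocator and itself adds some overhead, so the absolute numbers will differ from the figures quoted in the section.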