conduct our own analysis and study various algorithms and their probabilistic
advantages, focusing on one-dimensional bin packing, where we try to store
items of different heights in a linear bin.

\section{Next Fit Bin Packing algorithm (NFBP)}

Our goal is to study the number of bins $ H_n $ required to store $ n $ items
for each algorithm. We first consider the Next Fit Bin Packing algorithm, which
stores each item in the currently open bin if it fits, and otherwise opens a
new bin.

\paragraph{} Each bin will have a fixed capacity of $ 1 $, and items
will be of random sizes between $ 0 $ and $ 1 $. We will run X simulations % TODO
with 10 packets.

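The Next Fit rule described above can be sketched as follows; this is a minimal illustration, assuming unit-capacity bins and uniformly random item sizes (the function name `next_fit` is ours, not from the source):

```python
import random

def next_fit(sizes):
    """Pack items with the Next Fit rule: place each item in the
    currently open bin if it fits, otherwise open a new bin.
    Returns the number of bins used (H_n)."""
    bins = 0
    remaining = 0.0  # free space left in the currently open bin
    for s in sizes:
        if s > remaining:   # item does not fit: open a new bin
            bins += 1
            remaining = 1.0  # fixed bin capacity of 1
        remaining -= s
    return bins

# One simulation with 10 items of uniform random size on (0, 1)
items = [random.random() for _ in range(10)]
print(next_fit(items))
```

Note that only the free space of the current bin is tracked: closed bins are never revisited, which is what makes the algorithm a single pass over the items.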
\subsubsection{Variables used in models}
\subsubsection{Complexity and implementation optimization}
The NFBP algorithm has a linear complexity $ O(n) $, as we only need to iterate
over the items once.

When implementing the statistical analysis, the intuitive approach is to
run $ R $ simulations, store every result, then conduct the analysis. However,
when running a large number of simulations, this can consume a lot of memory.
We can optimize the process by computing the statistics on the fly, using sum
formulae. This uses nearly constant memory, as we only need to store the
current sum and the current sum of squares for each variable.

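The on-the-fly approach can be sketched as below; this is an illustrative outline, where `simulate` stands for any function returning one observation of $ H_n $ (its name and signature are our assumption):

```python
def run_simulations(R, n, simulate):
    """Estimate the mean and variance of H_n over R independent runs
    without storing individual results: only the running sum and the
    running sum of squares are kept, so memory use is constant in R."""
    total = 0.0     # sum of observations
    total_sq = 0.0  # sum of squared observations
    for _ in range(R):
        h = simulate(n)
        total += h
        total_sq += h * h
    mean = total / R
    # Var(X) = E[X^2] - E[X]^2, computed from the two running sums
    variance = total_sq / R - mean * mean
    return mean, variance
```

This direct sum-of-squares formula is simple but can lose precision when the variance is small relative to the mean; a running (Welford-style) update is a numerically safer variant of the same idea.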
% TODO : code
% TODO : move this somewhere else ?
% TODO : add a graph

\cite{hofri:1987}
% TODO : add some history