Speeding up AdaBoost for a real-time application
So I'm working on implementing an object detection algorithm from some recent computer vision papers. Right now I've got all my feature detection working at real-time rates (around 50 FPS) in Python/Cython, but now I'm trying to do classification and it is incredibly slow using sklearn.
The classifier I'm using is AdaBoost with 256 decision trees. Each sliding window in my frame has 5,120 features (128 x 64 window, 10 features per pixel, and 4x4 pooling). Because it is a sliding window, for a 640x480 frame with a stride of 6 (what the paper uses) I get a few hundred windows per frame.
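As a quick sanity check on those numbers (using only the figures stated above), the per-window feature count works out like this:

```python
# Feature-vector size per detection window, from the numbers in the post:
window_h, window_w = 128, 64   # window size in pixels
channels = 10                  # features (channels) per pixel
pool = 4                       # 4x4 pooling reduces each spatial dimension

n_features = (window_h // pool) * (window_w // pool) * channels
print(n_features)  # 5120
```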
I fit my initial AdaBoost model on a randomized data set, but prediction takes about 0.22 seconds per frame in sklearn, so around 4 frames per second. I'm wondering if anyone has experience with ways to speed up this prediction time, or whether it is worth implementing AdaBoost from scratch in Cython. So far I've had a lot of luck implementing otherwise slow sections of the algorithm in Cython, so that has been my go-to approach for optimization.
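One approach that avoids per-window Python overhead is to flatten the boosted ensemble into plain arrays and score every window in one vectorized pass. The sketch below uses depth-1 stumps and random placeholder values for the feature indices, thresholds, and leaf outputs (the real ensemble would use deeper trees and learned parameters), but the gather-and-sum pattern is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n_windows, n_features, n_stumps = 1000, 5120, 256

# Flattened stump ensemble (placeholder values, not a trained model):
feat_idx = rng.integers(0, n_features, size=n_stumps)  # feature tested by each stump
thresh = rng.standard_normal(n_stumps)                 # split threshold per stump
leaf = rng.standard_normal((n_stumps, 2))              # leaf outputs: [left, right]

X = rng.standard_normal((n_windows, n_features))       # all windows of one frame

def predict_scores(X):
    # Gather the single feature each stump tests, for every window at once:
    # shape (n_windows, n_stumps), no Python-level loop over windows.
    vals = X[:, feat_idx]
    go_right = (vals > thresh).astype(np.int64)
    # Pick each stump's left/right leaf and sum across the ensemble.
    return leaf[np.arange(n_stumps), go_right].sum(axis=1)

scores = predict_scores(X)
print(scores.shape)  # (1000,)
```

This keeps all the work inside NumPy, which is often much faster than calling a scikit-learn `predict` once per window; whether it beats a single batched sklearn `predict` over all windows would need measuring on the real data.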
For reference the paper I am basing my work on is this: http://authors.library.caltech.edu/49239/7/DollarPAMI14pyramids_0.pdf
I looked at their code, but it is really difficult to parse and is a mix of MATLAB/C++, which doesn't translate well to my application. It seems like they implemented their own version of AdaBoost, but it is quite difficult to interpret exactly what is going on since there isn't much documentation for the back-end processing. Any help would be greatly appreciated!
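Detectors in this family often also rely on early rejection (a "soft cascade"): the weak-learner scores are accumulated per window, and the window is discarded as soon as the running total falls below a rejection threshold, so most windows only evaluate a handful of trees. A minimal sketch, with made-up stump parameters and a hypothetical `reject_at` threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_stumps = 5120, 256

# Placeholder stump ensemble (not a trained model):
feat_idx = rng.integers(0, n_features, size=n_stumps)
thresh = rng.standard_normal(n_stumps)
leaf = rng.standard_normal((n_stumps, 2))  # [output if <= thresh, output if > thresh]

def score_with_rejection(x, reject_at=-1.0):
    """Accumulate weak-learner scores for one window, bailing out early
    when the running total drops below reject_at (hypothetical threshold)."""
    s = 0.0
    for t in range(n_stumps):
        s += leaf[t, 1] if x[feat_idx[t]] > thresh[t] else leaf[t, 0]
        if s < reject_at:
            return s, t + 1  # rejected after evaluating t+1 stumps
    return s, n_stumps       # survived the full cascade

x = rng.standard_normal(n_features)
score, evaluated = score_with_rejection(x)
```

The per-window loop here is exactly the kind of code that compiles well to tight C with Cython typed memoryviews, which may be why the authors' own implementation runs fast despite looping over windows.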
Edit: I should add that, looking at the literature, the thing that surprises me is that most people describe feature extraction as the bottleneck. I have that running at around 75 FPS on its own for 640x480 images; it's sklearn's prediction that is really holding everything back. I was surprised by this because I thought most of its underlying implementation was already in Cython.
submitted by NAOorNever