Automated Machine Learning, or AutoML, is one of the latest trends driving the democratization of data science. A large part of a data scientist's job is spent on data cleansing and preparation, and each of these tasks is repetitive and time-consuming. AutoML automates these tasks, and it extends to building models, selecting algorithms, and designing neural networks.
AutoML is essentially the process of applying ML models to real-world problems through automation. AutoML frameworks help data scientists with data visualization, model intelligibility, and model deployment. Their main innovation is hyperparameter search, which is used to select preprocessing components and model types and to optimize their hyperparameters.
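To make this concrete, the following is a minimal sketch (not taken from any particular AutoML framework) of the kind of joint search described above: a preprocessing component, a model type, and each model's hyperparameters are searched together, here with scikit-learn's GridSearchCV over a Pipeline. The dataset, candidate components, and value ranges are illustrative assumptions.

```python
# Joint search over preprocessing, model type, and hyperparameters:
# the core search that AutoML frameworks automate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])

# Each dict is one branch of the search space: the preprocessing step,
# the model type, and that model's hyperparameters are chosen jointly.
search_space = [
    {
        "scale": [StandardScaler(), MinMaxScaler()],
        "clf": [LogisticRegression(max_iter=5000)],
        "clf__C": [0.01, 0.1, 1, 10],
    },
    {
        "scale": [StandardScaler(), MinMaxScaler()],
        "clf": [RandomForestClassifier(random_state=0)],
        "clf__n_estimators": [100, 300],
        "clf__max_depth": [None, 5, 10],
    },
]

search = GridSearchCV(pipe, search_space, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```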
Automated Machine Learning provides methods and processes to make Machine Learning available to non-Machine Learning experts, to improve the efficiency of Machine Learning, and to accelerate research on Machine Learning.
Machine learning (ML) has achieved considerable success in recent years, and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform tasks such as preprocessing and cleaning the data, selecting appropriate features and model families, optimizing hyperparameters, and critically analyzing the results.
As the complexity of these tasks is often beyond non-ML experts, the rapid growth of machine learning applications has created demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area, which targets the progressive automation of machine learning, AutoML.
"Making a Science of Model Search" argues that the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, and that it is sometimes difficult to know whether a given technique is genuinely better or simply better tuned. To improve the situation, Bergstra et al. proposed reporting results obtained by tuning all algorithms with the same hyperparameter optimization toolkit. Sculley et al.'s ICLR'18 workshop paper "Winner's Curse?" argues in the same direction and gives recent examples in which correct hyperparameter optimization of baselines improved over the latest state-of-the-art results and newly proposed methods.
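As an illustration of that recommendation, the sketch below gives a baseline and a "newly proposed" method the same tuning budget with the same search tool before comparing them. The models, dataset, and search ranges are placeholder assumptions, not the setups used in those papers.

```python
# Compare two methods only after tuning both with the same toolkit
# and the same evaluation budget.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
budget = 25  # identical number of hyperparameter evaluations per method

candidates = {
    "baseline (logistic regression)": (
        LogisticRegression(max_iter=5000),
        {"C": loguniform(1e-3, 1e3)},
    ),
    "proposed (RBF SVM)": (
        SVC(),
        {"C": loguniform(1e-3, 1e3), "gamma": loguniform(1e-4, 1e0)},
    ),
}

for name, (model, space) in candidates.items():
    search = RandomizedSearchCV(
        model, space, n_iter=budget, cv=5, random_state=0, n_jobs=-1
    )
    search.fit(X, y)
    print(f"{name}: tuned CV accuracy = {search.best_score_:.3f}")
```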
Hyperparameter optimization and algorithm configuration provide methods to automate the tedious, time-consuming, and error-prone process of tuning hyperparameters for new tasks at hand. For example, we provide software packages for hyperparameter optimization.
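As a rough illustration of what such packages automate, here is a minimal hand-rolled random search loop: sample a configuration, evaluate it, keep the best. Dedicated hyperparameter optimization packages replace this loop with model-based search, multi-fidelity evaluation, and parallel execution. The model and the search ranges are illustrative assumptions.

```python
# The basic loop that hyperparameter optimization tools automate and improve on.
import random

from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)
rng = random.Random(0)

def sample_config():
    # Draw one hyperparameter configuration from the search space.
    return {
        "learning_rate": 10 ** rng.uniform(-3, 0),
        "n_estimators": rng.randint(50, 300),
        "max_depth": rng.randint(2, 6),
    }

best_score, best_config = -1.0, None
for _ in range(20):  # fixed evaluation budget
    config = sample_config()
    score = cross_val_score(GradientBoostingClassifier(**config), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```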