There has been a lot of discussion about machine learning systems possibly sharing the prejudices of their creators. Do you think that can be problematic?
This is related to how training datasets are created. Since data labelling is still done by humans, in some specific use cases human bias can end up reflected in the ML model. It is a very interesting topic and an area of active research.
What about AutoML performance for DNA and illness predictions?
Currently only AutoML Vision has been released, and it is tailored for image classification. New versions will come for other use cases, but we don't have any benchmarks for DNA/illness prediction at the moment.
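For orientation, creating an image-classification dataset with the (later, GA) google-cloud-automl Python client looks roughly like the sketch below; the project ID and display name are hypothetical, and the alpha-era API may differ.

```python
from google.cloud import automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical project

dataset = automl.Dataset(
    display_name="flowers",  # hypothetical name
    image_classification_dataset_metadata=automl.ImageClassificationDatasetMetadata(
        # One label per image; MULTILABEL is the alternative.
        classification_type=automl.ClassificationType.MULTICLASS
    ),
)
# create_dataset returns a long-running operation; result() waits for it.
created = client.create_dataset(parent=parent, dataset=dataset).result()
print("Created dataset:", created.name)
```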
What options does the advanced training method allow you to choose?
More training time and hyperparameter tuning.
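As a concrete illustration, in the GA google-cloud-automl Python client the training budget is exposed as train_budget_milli_node_hours; the project and dataset IDs below are hypothetical, and the alpha-era API may have exposed this differently.

```python
from google.cloud import automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical project

model = automl.Model(
    display_name="flowers_advanced",  # hypothetical name
    dataset_id="ICN1234567890",       # hypothetical dataset ID
    image_classification_model_metadata=automl.ImageClassificationModelMetadata(
        # 8000 milli node hours = an 8 node-hour training budget.
        train_budget_milli_node_hours=8000
    ),
)
# create_model returns a long-running operation; training takes hours.
operation = client.create_model(parent=parent, model=model)
print("Training started:", operation.operation.name)
```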
What kind of data, and what features in the data, can AutoML handle (besides images for computer vision classification)?
AutoML Vision can handle only images. New AutoML versions are coming soon and will handle new data types.
Is it possible to manually design or tune the NN structure in AutoML?
No, the purpose of AutoML is to automate these steps and provide better results than those achieved by humans.
The project is in alpha. When is a new release planned?
The exact date is under NDA.
Does AutoML also support model creation for the TF Object Detection API?
Not at the current stage, but new features are added constantly.
Once a model has been created with AutoML, is it possible to export it (the model itself and/or its hyperparameters)?
The model is automatically exported to Cloud ML Engine for serving, but it cannot currently be inspected or exported any further.
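As an illustration of calling the automatically served model, here is a minimal prediction sketch with the GA google-cloud-automl client; the model ID and image path are hypothetical.

```python
from google.cloud import automl

prediction_client = automl.PredictionServiceClient()
# Hypothetical fully qualified model ID.
model_name = "projects/my-project/locations/us-central1/models/ICN0000000000"

with open("test_image.jpg", "rb") as f:  # hypothetical local image
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

response = prediction_client.predict(name=model_name, payload=payload)
for annotation in response.payload:
    # Each annotation carries the predicted label and its confidence score.
    print(annotation.display_name, annotation.classification.score)
```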
What is a reasonable minimum dataset size for a proof of concept with AutoML?
At least a hundred or so samples per label. The greater the variability of the images to be recognized, the larger the recommended dataset. On average, a few thousand samples produce excellent results.
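For context, AutoML Vision ingests labels through a CSV that pairs each Cloud Storage image URI with its label; below is a minimal sketch of assembling such a file, with hypothetical bucket paths and label names.

```python
import csv

# Hypothetical gs:// URIs; aim for at least ~100 samples per label,
# as suggested in the answer above.
samples = {
    "daisy": [f"gs://my-bucket/daisy/{i:04d}.jpg" for i in range(120)],
    "tulip": [f"gs://my-bucket/tulip/{i:04d}.jpg" for i in range(150)],
}

with open("import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for label, uris in samples.items():
        for uri in uris:
            # AutoML Vision import row format: image_uri,label
            writer.writerow([uri, label])
```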