Azure Cognitive Services: review of an all-day presentation event

I went to a day presentation on Microsoft AI Cognitive Services on Saturday (14th Dec) and, although it started slowly, some of the presenters were very interesting. The afternoon sessions were definitely more interesting than all but one of the morning sessions.

ML build insights

One talk was from Simo Carryer of Xero, about trying to implement ML in the business and the issues and challenges that came up, with good insights into the actual process of building ML features into websites.

One particular insight was to establish a baseline test on the data you are going to train your tool on. He asked: if you didn't have the ML tool, what would your best guess be? Say, taking the average of the data. You can then test your ML tool against that baseline, and if it is no better, it's back to the drawing board, since that means you could already guess as well as the tool can.
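That baseline idea can be sketched in a few lines. The numbers here are hypothetical, and the "model" predictions are just a stand-in for whatever an ML tool would produce:

```python
# Baseline check: before trusting an ML model, compare it against the
# simplest possible predictor -- here, the mean of the training data.

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical historical figures (training data) and actuals for test days.
train = [10, 12, 11, 13, 12]
test_actual = [12, 14, 11]

# Baseline: always predict the training mean.
baseline_pred = [sum(train) / len(train)] * len(test_actual)

# Stand-in for predictions from an actual ML model.
model_pred = [12.5, 13.0, 11.5]

baseline_mae = mean_absolute_error(test_actual, baseline_pred)
model_mae = mean_absolute_error(test_actual, model_pred)

# If the model can't beat the plain mean, it's back to the drawing board.
print(f"baseline MAE: {baseline_mae:.2f}, model MAE: {model_mae:.2f}")
```

The point is that the comparison is cheap: one extra number (the mean) tells you whether the expensive tool is adding anything.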

Also, can your ML tool use offline data to make predictions for the next day, or does it have to be instantaneous and online? If online, it has to analyse less data, so the prediction is not as good; whereas if offline, training can run overnight and the resulting algorithm used the next day.

Machine Learning Studio

Actually, I faded out of this one because it was the organizer presenting it via the web. I switch off with her presentations; she assumes you're a bit dumb and I do not enjoy her presentation style. But the tool is good in that you can do training on your own data. This video seems to be quite helpful:

I was trying to look at the data once I'd got a template experiment, but couldn't, so the video was useful. I think what is interesting is that you can deploy your tool once you've trained it.

To view your data at the beginning, you can click the number at the bottom of the incoming data module, but you need a notebook to view it in the middle (so you have to make one somewhere, as this is in R or Python I think). At the end you can plug in another module to convert to CSV, but then you have to run it all again.

I followed this video tutorial but still cannot get something to embed in a web page as a working tool:

After creating the trained ML set and getting a pop-up to test the data, I couldn't get Postman to retrieve the data. I spent about four hours trying to find a method (PHP and JavaScript) to put on a web page to make the call. Unfortunately the endpoints don't accept it. Even after going into the browser's developer tools, looking at the network traffic, editing what was sent and copying it to Postman, still no success.
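For anyone hitting the same wall, this is roughly the shape a classic ML Studio request-response call is supposed to take, as I understand it: a POST with a `Bearer` key and an `Inputs`/`GlobalParameters` JSON body. The endpoint URL, key, and column names below are all placeholders; the real schema is shown on the web service's consume page and should be trusted over this sketch.

```python
import json
import urllib.request

# Sketch of calling an Azure ML Studio (classic) request-response endpoint.
# ENDPOINT_URL and API_KEY are placeholders taken from the service's
# consume page; the body schema here is the classic "Inputs" format.

def build_request(endpoint_url, api_key, column_names, values):
    """Build the POST request for a classic ML Studio web service."""
    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": column_names,
                "Values": values,  # one inner list per row to score
            }
        },
        "GlobalParameters": {},
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }
    return urllib.request.Request(
        endpoint_url, data=json.dumps(body).encode("utf-8"), headers=headers
    )

# Usage (network call commented out -- needs a real endpoint and key):
req = build_request(
    "https://example.net/score", "MY-API-KEY",
    ["feature1", "feature2"], [["1.0", "2.0"]],
)
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

A browser-side call on top of this also has to get past CORS, which may well be why the PHP/JavaScript attempts were rejected while the server never complained about Postman's requests in quite the same way.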

So next time I see a bullshit presentation that makes it look easy I’ll eye it with deep suspicion.

Face recognition and sentiment app with Azure cognitive services face

The first afternoon demo was on mobile apps with cognitive services such as Face, sentiment analysis (to see what emotion people are showing) and Custom Vision (you can train it on your own images; I suggested to Zoe that she could use a tool like this for her business). It can take an image and do a comparison to see which classification it belongs to. The demonstration used dog breeds, with about 30 images of each breed for training; the presenter then uploaded a test image and it worked.

The interesting thing about this was that after creating an account and spinning up a service (Azure does this in the background), you then just make GET and POST requests. He demoed this using Postman with the specific URL and key for the service.
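That request pattern looks something like the sketch below: the only service-specific parts are the URL and a subscription-key header. The endpoint and key here are placeholders (the real ones come from the Azure portal for your resource), and the `/face/v1.0/detect` path is my assumption of a typical Face API route:

```python
import json
import urllib.request

# Sketch of the pattern from the demo: a Cognitive Services call is just an
# HTTP request with the service URL and an Ocp-Apim-Subscription-Key header.
ENDPOINT = "https://example.cognitiveservices.azure.com/face/v1.0/detect"
KEY = "MY-SUBSCRIPTION-KEY"

# Point the service at a publicly reachable image URL.
body = json.dumps({"url": "https://example.net/photo.jpg"}).encode("utf-8")

req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": KEY,
    },
)
# with urllib.request.urlopen(req) as resp:   # network call, needs a real key
#     print(json.loads(resp.read()))
```

Which is presumably why Postman works so well for demos: there is nothing to the client beyond a URL, a header, and a JSON body.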

I am interested to see if this can be used with GlideApps: if it's only making a call to an API, then this could be embedded in a button, so you could upload an image and let it do the testing. It could suit a translation app too: speech to text, translate the text, text to speech. Though that's a lot of steps, and maybe expensive across all those services. I am still waiting to set up a test account; this service is down at present with Microsoft. See later post: it's not the case. I think I'd need to build an Azure-hosted website, then it would work.

End comment

After a day of buggering around with these features, Cognitive Services Computer Vision and the ML labs, I couldn't host either on GlideApps or my website. So there's an exclusivity that keeps it all in the Azure ecosystem. That is fine if you want to sit in that ecosystem, but I don't. I want the flexibility to be able to use other services as well; that is why Docker is interesting.

That is fine in its way, but they shouldn't purport to be offering an open system.

I like the idea of what they are doing, but I'm unsure what the free tier provides and what I have to pay for. I like playing in the free tier environment to see what these tools are capable of.

Overall, I’m not sure I’d recommend the Cognitive Services, a bit limited.

I like the machine learning labs and the potential for setting up your own ML solutions on websites. I would have liked to play with that interface more.

Overall, AWS, Google Cloud and MS Azure are all about getting people hooked in; they are all after your money. AWS seems to allow the most open experimentation on the free tier. I'm not that familiar with Azure, but it's not that transparent. Maybe another trip to AWS and Google to see their ML cloud offerings; they can't be any less friendly than Azure.

On a very positive note, I have been to two all-day MS events, one on IoT and one on AI. These occur in Wellington and are free, so better than AWS and Google. I have had some product learning with MS tools, and that I do appreciate, so thank you MS.

Further research shows that another ML approach is to use Python and Flask. Flask is a micro web framework that, from what I can figure out, lets you make API calls to it to get data. So you can use Python to do ML on a dataset, then serve the result via Flask and make API calls from a web page to Flask. Notionally that would mean running Python and Flask on my VPS all the time. Maybe worth exploring.
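A minimal sketch of that Python + Flask idea: compute (or train) something offline, then expose a prediction endpoint the web page can call. The "model" here is just the mean of some hypothetical numbers, standing in for a real trained model, and the route name is my own invention:

```python
# Minimal Flask service exposing a "prediction" over HTTP.
# Requires Flask (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

# Offline step: pretend this is the output of overnight training.
data = [10, 12, 11, 13, 12]               # hypothetical historical data
model_prediction = sum(data) / len(data)  # stand-in for a trained model

@app.route("/predict", methods=["GET"])
def predict():
    # The web page fetches this JSON instead of talking to Azure.
    return jsonify({"prediction": model_prediction})

# To serve it on the VPS: app.run(host="0.0.0.0", port=5000)
```

A page would then just `fetch("http://<vps>:5000/predict")`, which is the same URL-plus-JSON pattern as the Azure services, only self-hosted and with nothing locked to one vendor's ecosystem.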