Comments on eMpTy Pages: "Artificial Intelligence: And You, How Will You Raise Your AI?" by Kirti Vashee (http://www.blogger.com/profile/16795076802721564830). 4 comments.

Anonymous (ES), 2018-01-06:
All of that should be right. Recent recognition of 50 years in the industry: the Harvard NLP group's collaboration with Systran and their open-source neural machine translation toolkit, OpenNMT, ranked 26th among the most amazing machine-learning projects of the past year:
https://medium.mybridge.co/30-amazing-machine-learning-projects-for-the-past-year-v-2018-b853b8621ac7
Congrats to the teams for believing in it. After being exposed to errors and inconsistencies for 50 years, they worked on them, excelled, and now offer historically patented know-how to everyone, for free. That's a high-tech partner who keeps on writing history. I can't say anything but thanks.

Anonymous, 2018-01-05:
Excellent. A big bravo to the Systran teams for believing in it and reinventing themselves since 1968. From the moon to the earth, and from the earth to Mars!

Anonymous (Elsa), 2018-01-04:
Machines need to learn from our errors, not from our references. What Jean formalises with the different incremental training cycles is, I believe, a way to trigger knowledge by massively projecting errors while at the same time excluding them in understanding, writing, recognizing, and translating. Not only does the machine learn, it also knows how to identify and avoid errors under different input circumstances. The linguistic backbone of the 50-year-old translation system SYSTRAN allows the linguistic knowledge to be formalized in parallel, making different kinds of transplants possible. A statistical backbone cannot reach that level. That is why the approach is unique.

Luigi Muzii (https://www.blogger.com/profile/11617962606487603486), 2018-01-02:
I beg your pardon, Monsieur Senellart, but AFAIK the first algorithm to beat humans was the one developed for Deep Blue, which beat Kasparov, probably the greatest chess master of all time. Then came Watson's, which beat two human champions at Jeopardy! Maybe you make a distinction between ML and DL, but both are AI applications. Anyway, I agree with Kirti that, as neural networks improve, the technology requires more and more expertise, and in the end MT will be even harder to grasp than it is now. Right now, we cannot say whether a DNN has been learning, what it learns, or how and from what, and we can hardly alter the learning pattern, precisely because we cannot tell what happens in the hidden layers. We are in a grey area, in a dense mist, following what seems to be a light. What if it is not?