I really think this discussion in MIT Technology Review makes an important point about the lack of transparency in AI code, algorithms, and models. If an organization like Google does not feel able to give any insight into how its systems work, we have a real issue. The critique of a paper published by Google scientists in Nature is a clear challenge to this:
But according to its critics, the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech.
Where do we draw the line between legitimate proprietary concerns about competition and the broader scientific and societal need for openness to verify such advances?