Krishna Narasimhan (TU Darmstadt)
Over the last several decades, software has been woven into the fabric of every aspect of our society. As software development surges and the code infrastructure of enterprise applications ages, it is now more critical than ever to increase software development productivity and modernize legacy applications. Advances in machine learning and deep learning have enabled numerous breakthroughs, motivating researchers to leverage AI techniques to improve software development efficiency. The fast-emerging research area of AI for Code has thus garnered new interest and gathered momentum. In this talk, we present CodeNet, a large-scale dataset consisting of over 14 million code samples and about 500 million lines of code in 55 different programming languages, aimed at teaching AI to code. Beyond its large scale, CodeNet offers a rich set of high-quality annotations to benchmark and help accelerate research in AI techniques for a variety of critical coding tasks, including code similarity and classification, code translation across a wide range of programming languages, and code performance (runtime and memory) improvement. Additionally, CodeNet provides sample input and output test sets for 98.5% of the code samples, which can be used as an oracle for determining code correctness and potentially to guide reinforcement learning for code quality improvement. As a usability feature, CodeNet includes several pre-processing tools that transform source code into representations readily usable as inputs to machine learning models. Results of code classification and code similarity experiments using the CodeNet dataset are provided as a reference. We hope that the scale, diversity, and rich, high-quality annotations of CodeNet will offer unprecedented research opportunities at the intersection of AI and Software Engineering.
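To make the oracle idea concrete, here is a minimal sketch of how a provided input/output pair could be used to check a code sample's correctness. The file names, the choice of Python as the sample language, and the whitespace-insensitive comparison are illustrative assumptions, not CodeNet's actual directory layout or tooling.

import subprocess

# Hypothetical paths for illustration; CodeNet's real layout may differ.
SOURCE = "solution.py"          # a submitted code sample (assumed Python here)
SAMPLE_INPUT = "input.txt"      # the provided sample input
EXPECTED_OUTPUT = "output.txt"  # the provided expected output

def is_correct(source, sample_input, expected_output, timeout=5):
    """Run a code sample on the sample input and compare its stdout
    against the expected output, using the test pair as an oracle."""
    with open(sample_input) as f:
        stdin_data = f.read()
    try:
        result = subprocess.run(
            ["python3", source],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # treat non-terminating samples as incorrect
    with open(expected_output) as f:
        expected = f.read()
    # Compare modulo leading/trailing whitespace, a common judge convention.
    return result.returncode == 0 and result.stdout.strip() == expected.strip()

A binary accept/reject signal of this kind is what could, in principle, serve as the reward in reinforcement-learning-based code improvement.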
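Similarly, as one illustration of transforming source code into a model-ready representation, the sketch below lexes a Python sample into a sequence of tokens using the standard tokenize module. This is a hypothetical stand-in for CodeNet's own pre-processing tools, which cover many languages and richer representations.

import io
import tokenize

def to_token_sequence(source: str):
    """Lex Python source into (token_type, text) pairs, dropping purely
    structural tokens so the sequence is easier to feed to a model."""
    skip = {tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER}
    return [(tokenize.tok_name[tok.type], tok.string)
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.type not in skip]

sample = "def add(a, b):\n    return a + b\n"
print(to_token_sequence(sample))
# e.g. [('NAME', 'def'), ('NAME', 'add'), ('OP', '('), ...]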
I am a research scientist at the T.J. Watson IBM Research Center in Yorktown Heights, NY, United States. Currently, I am working on the design of algorithms and software to accelerate large-scale machine learning and deep learning with big data, as well as on the design and optimization of machine learning algorithms, applications, and workflows.
I have published in international conferences and journals and am currently involved in several governmental and academic projects. I have worked on a range of topics: the design of algorithms and techniques for NLP and text mining; sentiment analysis and opinion mining of blog, forum, and social network content; action recognition in videos; reinforcement learning; analysis and prediction of genomic functions; stock market forecasting; the use of Markov chains to build prediction models; churn analysis; and more.
I hold a Ph.D. in Informatics from the University of Bologna – Alma Mater Studiorum.
I was also an adjunct professor for the course “High Performance Machine Learning” at the NYU Courant Computer Science Department and for the course “Data Intensive Application” at the University of Bologna.