The days when musicians learned songs by ear from YouTube videos may be coming to an end. They no longer need to strain to pick out one "clean" instrument from a mix that drowns it out. Nowadays, nearly all of us have a capable assistant that, once we show it what we want, handles the task better than we could ourselves.
Scientists at the Massachusetts Institute of Technology (MIT) have built something new. Researchers at CSAIL (the Computer Science and Artificial Intelligence Laboratory) have developed a project that uses deep neural networks to isolate individual instruments in a music video. At the same time, it can suppress the other instruments.
The network was trained on 60 hours of music videos and can identify more than twenty different instruments. The user simply clicks on the instrument they want to isolate, and the artificial intelligence does the rest. This is a process that usually takes trained professionals (for example, audio forensics experts) hours of manual editing. PixelPlayer can analyze the audio, identify specific instruments at the pixel level, and extract the sounds associated with those instruments.
The MIT scientists say that the CSAIL system is still learning and improving. It still has difficulty distinguishing similar musical instruments, for example several wind instruments in the same song. The technology could prove important for remastering older musical recordings whose original studio or concert masters no longer exist. Other uses include remixing, or helping musicians who are learning to play a particular passage of a song by muting the other instruments.
Another interesting use is in rearranging old songs. In the future, the technology could even swap instruments, for example replacing an electric guitar with an acoustic one.
PixelPlayer uses deep learning: it searches for patterns in data using neural networks trained on the videos. One network analyzes the visual frames of the video, another analyzes the audio, and a third associates specific pixels with specific sound sources. The system is self-supervised, which means that even the MIT scientists themselves do not fully understand how the technology works, what features it learns, or which cues it uses for the analysis.
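The three-network idea described above can be sketched in a few lines. This is a minimal toy illustration, not PixelPlayer's actual code: the networks are replaced by random stand-ins, and all dimensions, function names, and the sigmoid gating are assumptions chosen only to show how per-pixel visual features can be matched against audio components to "click" on an instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not the values PixelPlayer uses).
H, W = 4, 4       # spatial grid of visual features
D = 8             # shared feature dimension
K = 8             # number of learned audio components
T, F = 16, 32     # spectrogram time and frequency bins

def video_net(frames):
    # Stand-in for the visual network: one D-dim feature per pixel region.
    return rng.standard_normal((H, W, D))

def audio_net(spectrogram):
    # Stand-in for the audio network: splits the mixed spectrogram into K
    # component spectrograms plus a D-dim feature describing each component.
    components = np.abs(rng.standard_normal((K, T, F)))
    component_feats = rng.standard_normal((K, D))
    return components, component_feats

def synthesize(pixel_feat, components, component_feats):
    # The third network: score a pixel's visual feature against each audio
    # component, gate the scores through a sigmoid, and mix the components.
    scores = component_feats @ pixel_feat              # shape (K,)
    weights = 1.0 / (1.0 + np.exp(-scores))            # per-component gate in (0, 1)
    return np.tensordot(weights, components, axes=1)   # (T, F) spectrogram

mix = np.abs(rng.standard_normal((T, F)))
visual = video_net(None)
components, feats = audio_net(mix)

# "Click" on pixel (1, 2): recover the sound associated with that location.
separated = synthesize(visual[1, 2], components, feats)
print(separated.shape)  # → (16, 32)
```

In the real system the stand-in functions would be convolutional networks trained end-to-end, with the training signal coming only from the videos themselves (self-supervision), never from labeled instrument tracks.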
Hang Zhao, who leads the project at CSAIL, says the technology could also be used in robots. It could help them better understand the sounds emitted by objects in their surroundings, such as animals or vehicles.
It seems all but certain that artificial intelligence will gradually find its place in our lives. It will make our work easier, free up our leisure time, and take over tasks that are uncomfortable and monotonous, leaving us more time for enjoyable pursuits. For example, learning to play a musical instrument.
Source: MIT