Aimi is a generative AI application for music that makes songs that are everlasting: collaborations between artists and audiences that start, but never have to end.
In this TechFirst I chat with Edward Balassanian, the CEO of Aimi, a generative AI for music that musicians can use to create, generate, and even code music … while allowing audiences to add, customize, extend, and personalize the sound. There’s free music, monetization for artists, and over 200 artists onboard who are creating something entirely new in music.
See my story on Forbes here … and check out my interview with Balassanian from 2020 here.
(subscribe to my YouTube channel)
Audio podcast: generative AI for social music creation
Subscribe on your favorite platform …
Key points in this conversation, as generated by GPT-4
- Aimi is a generative AI application for music that enables collaborations between artists and audiences, creating songs that start but never have to end. The application allows musicians to create, generate, and even code music, while also letting audiences add, customize, extend, and personalize the sound. The platform offers free music and provides monetization opportunities for artists. Over 200 artists are currently engaged with Aimi, creating something new in music.
- Generative AI has been around since the 1970s and is currently gaining significant attention. Aimi.ai, founded by Edward Balassanian, has developed a generative music application, which it launched at South by Southwest.
- Aimi’s focus is on enabling creators to leverage its technology to accelerate music creation. The app is particularly aimed at enthusiasts who want to interact with music, providing a light version of music creation.
- Aimi’s AI has been trained to produce music the way a producer does, bypassing copyright issues tied to using other artists’ material for training. It exposes levers to the AI’s music-making process that align with how a producer thinks about creating music, giving artists greater control over the music generation process.
- The platform combines expert systems with machine learning. The expert systems make music similar to how a producer would, while the machine learning models add a twist of the user’s or artist’s preferences to make the music more personalized, genre-specific, or artist-specific.
- Aimi has developed a programming language called Aimi Script, a full-featured generative music programming language, which artists can use to write elaborate programs for making music.
- For ease of use, Aimi has distilled the expert techniques of close to two hundred artists into algorithms that new artists can simply drag and drop. This allows artists to shape a multi-dimensional musical space, with Aimi picking different algorithms and loops to combine at run time.
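To make the hybrid described in these points concrete (rule-based expert systems choosing valid loops at run time, with learned listener preferences adding a personal twist), here is a toy sketch. Everything in it is hypothetical and invented for illustration: the loop names, the rule, and the preference score are not Aimi’s actual code or API, just a minimal picture of how an expert-system layer and a feedback-driven layer can combine.

```python
import random

# Hypothetical sketch (not Aimi's actual code): combine an "expert system"
# rule layer with a simple learned-preference score to pick loops at run time.

# A tiny library of loops, each tagged with a musical role and a key.
LOOPS = [
    {"name": "beat_01", "role": "beat", "key": "Am"},
    {"name": "beat_02", "role": "beat", "key": "Am"},
    {"name": "bass_01", "role": "bass", "key": "Am"},
    {"name": "pad_01", "role": "harmony", "key": "Am"},
    {"name": "pad_02", "role": "harmony", "key": "C"},
]

def expert_rule(loops, role, key):
    """Expert-system layer: only loops in the required role and key qualify."""
    return [l for l in loops if l["role"] == role and l["key"] == key]

def preference_score(loop, likes):
    """Stand-in for the ML layer: score a loop by accumulated thumbs-up feedback."""
    return likes.get(loop["name"], 0)

def pick_section(likes, key="Am"):
    """Assemble one section (a beat, a bass line, a harmony layer): each part is
    chosen from rule-valid candidates, biased toward listener preferences."""
    section = []
    for role in ("beat", "bass", "harmony"):
        candidates = expert_rule(LOOPS, role, key)
        best = max(preference_score(c, likes) for c in candidates)
        # Highest-preference candidate wins; ties are broken at random.
        section.append(random.choice(
            [c for c in candidates if preference_score(c, likes) == best]))
    return [l["name"] for l in section]
```

With a listener who has upvoted `beat_02`, `pick_section({"beat_02": 3})` returns a section led by that beat; with no feedback, the tie-break keeps the output varied. The real system described in the interview operates on actual audio and far richer musical rules, but the division of labor is the same: rules guarantee musical validity, feedback shapes taste.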
Transcript: Chatting with Aimi CEO Edward Balassanian
Note: this is AI-generated and lightly edited. It may contain errors.
Generative AI is nothing new. In fact, according to an Intel exec I was speaking with about a week ago, it’s been around since the 1970s, believe it or not. But obviously it’s kind of having a moment right now …
OpenAI, ChatGPT … all of that stuff, and so many competitors joining in, like Google. Among other things going on, today we’re chatting with kind of an OG of generative AI … maybe not quite that OG.
His name is Edward Balassanian, and he’s the founder and CEO of Aimi.ai. They have a very cool music application that we’re going to hear a little bit more about, and they’re launching formally right now at South by Southwest.
Thank you. Thank you for having me.
Hey, super pumped to have you. We chatted … it’s got to be half a year ago, maybe a year ago, about Aimi. For those who aren’t familiar, what is Aimi?
Aimi is a generative music platform, and our focus is on enabling creators to leverage the technology that we’ve built to accelerate their ability to create music. The app that I’ll show you today is part of that platform, more focused on enthusiasts who want to interact with music … kind of a light version of creating, if you will.
It’s quite interesting to me, and I’ve actually used it on and off since we chatted. What’s been really helpful for me is just focus music. Like, I’m working, I need something going on, but I don’t want something that’s too intrusive into my environment, and it just auto-generates (with help, we’ll get into that) some cool music that works. But you’ve got various different genres, and you’ve got artists involved in the process. Talk about that.
One of the challenges (I’ll give some context first) that has appeared is lawsuits: artists and creators feeling like, hey, that’s mine. You used my material to train your system so that you can give away for free something that I would have been able to provide.
Talk about that and how it works … how you’re working with creators at Aimi.
Well, you bring up a really good point and I think the thing to emphasize there is that we’re not trying to train our AI by having it listen to music to try to copy that music. We think the best you’re going to do is create just a cheaper version of that music. And the world has enough cheap music so we don’t need more cheap music out there.
The difference between that kind of training and what we’re doing is we’re training Aimi to produce music the way a producer does, so we kind of bypass the whole problem. And the other benefit of that is that we can expose levers to the way the AI is making the music that mirror the way a producer makes music.
So the control that they get maps to the way that they think about creating music as well. A good analogy for that: if you think about ChatGPT, for example, if you type in a prompt and you get a three-page response back, you can’t really change that second paragraph to your liking, the way you want. Or maybe the third sentence in the fourth paragraph.
It’s kind of opaque when you think about these large monolithic neural networks, and that doesn’t really work when you’re talking about enabling creators in the music space. So those are kind of the key differences that enable us to avoid the copyright issue, but also really speak to creators in the way that they make music.
Super interesting. Maybe go a little more in depth. How did you train Aimi to generate music? How did you teach it what music is and how music works, and what people want to hear when they hear different types of music?
When we first spoke about a year ago, we were very early in the process and we had essentially a model that could pick loops and combine them at run time and make music. Now the issue with that is, any kind of model typically is going to have this sort of inertia problem where you need feedback to train the model, but the feedback is difficult to get because people aren’t readily going to just sit around and listen to bad music and give you feedback.
So you kind of have this bar that you have to get over.
The music has to get good enough that people will listen to it for pleasure, and in the course of doing so provide you feedback. So we’ve essentially combined expert systems with machine learning. What the expert systems do is make music similar to the way a producer would, but they’re constantly training our machine learning models to replicate what they’re doing, with the twist of the user’s preferences or the artist’s preferences included. The expert system is good at making music; it’s just not nuanced to your tastes, or a genre, or an artist’s tastes. That’s where machine learning comes in and adds that sort of twist that makes it more personalized, more genre-specific, or more artist-specific.
Interesting. Are you using “expert systems” the way some people use the term “deep learning”? So it’s kind of like guided machine learning, in a sense?
Well, AI is kind of a broad umbrella, and underneath AI you’ve got machine learning, you’ve got deep learning, and you’ve got expert systems. Expert systems typically can be algorithmic, and we do use a lot of algorithms.
We’ve in fact invented a programming language called Aimi Script. We didn’t have this when we spoke to you before. Aimi Script is a full-featured generative music programming language, a script-based language, so you as an artist can sit down and write a very elaborate program using Aimi Script and make music.
We’ve made it really easy, so it’s basically drag and drop. Essentially, we’ve sat down with close to two hundred artists and distilled their expert techniques into algorithms. As a new artist on the platform, you can just grab the algorithms that you want, put them together, shape this multi-dimensional musical space, and then Aimi takes you on a journey by picking the different algorithms and picking the different loops that combine with those algorithms at run time.
Wow. So I know from the beginning you had the idea of working with creators, working with artists. You’ve added artists, you mentioned; it’s now about two hundred. What can I expect when I hear something from an artist and from Aimi? Is Aimi more than a piano or a guitar or an instrument? How are the artist and the system working together to create music?
How much of a composition is from an artist? How much is from Aimi?
Yeah, one thing that we learned in the past year of working with a lot of artists is that there’s no one answer to that question.
In fact, some artists that we met with are thrilled to jump into the code, write Aimi Script, and be fully prescriptive about the music that’s coming out of the system, and other artists are really keen to just let the system do its thing. One of the key elements that we’ve added to our platform is the ability for the artist to really steer that: they’re in charge of how much control they have versus how much control the AI has. We found that giving the artist that kind of control is just one more way to empower them to really be expressive … whether that’s being expressive by seeding something and then letting go, or being expressive by being very prescriptive about what they want to hear.
Can anyone come on board and be an artist and use Aimi and get access to the programming language? Is it invite only? How’s it work?
So we have a product called Aimi Studio that’s being released in a couple of months, and Aimi Studio is the creator platform.
The app that we’ve just released today is more of a consumer platform or a consumer app, I should say, but we’re using it to really make it clear that music doesn’t have to be so intimidating. It should be more accessible and inclusive.
In fact, part of the reason we don’t use the word “artist” as much is that we want to unleash the inner artist in everybody. We want everyone to feel like that; we don’t want it to be uniquely for people who are quote-unquote artists. Our interactive music player basically teaches you how music is made and gives you the ability to take ownership of your music experience, and we find that’s empowering, especially for people who aren’t trained musicians, who don’t know music theory or music composition or the nuances of a genre that they want to mix.
All of that is a very steep learning curve. And then if you add on to that the tools that you have to use for production, it’s a significant investment before you can even make a sound, let alone something worth listening to. So the app gives you the ability to interact with existing experiences that have been created, and then Aimi Studio lets you create new experiences. But again, you can let the AI guide you in that, or you can take full control over it, so you can be a novice or an expert and still benefit from Aimi Studio.
So many questions here. When an artist releases a composition, is it a discrete, unchanging thing like we have in the old world of music? There’s a track … I play it. It’s from an album, and I can’t engage with it. I can’t change it. I can’t modify it.
Or is it something that I as a listener can engage with and change and evolve a little bit?
The latter, so we call them experiences for that reason, and one of the really important elements of an experience is that it’s a dynamic thing.
So you as an artist are constantly getting feedback about how people are engaging with your music. You know whether they gave a thumbs-up on your beats, a thumbs-down on your harmony. You can swap out the harmony, pop in a new melody. You can constantly change and evolve the music that’s being expressed. But more importantly, just like we give these levers to control the AI, we’ve exposed the same levers to the listener … simpler ones.
But this is what allows the listener to kind of take the steering wheel with Aimi when it goes on this journey through this multi-dimensional space that the artist created, which we call an experience.
It’s fascinating if you think about it. Music has always been an engagement, an interaction between somebody who’s creating and somebody who’s listening … if you hear live recordings, the audience becomes part of what’s actually recorded. This is the next level of that.
I’m trying to position Aimi in my brain somewhere. I like to put things in slots occasionally. You know, maybe that’s just me. Maybe it’s human. I’m thinking, like, there’s GarageBand, and I’ve used GarageBand. I don’t consider myself a musician; I couldn’t play a piano or a guitar or a flute or something like that.
But I’ve used GarageBand to create some stuff, some for my podcast, other things that, you know, I think, hey … that’s kind of cool. That’s kind of nice.
This is different. How do you characterize its difference?
Well, the number one thing is that we’ve built the whole ecosystem, and we had to. Because if you’re an artist and you are going to use Aimi Studio to make generative music, we need to show you that there’s an audience, that people will actually listen to and consume it.
That’s one of the reasons we built this app. When we first talked to you, the app was very early, and we had built it primarily for artists to be able to listen to the music that they were making on the platform.
The crazy thing is, we had close to a hundred thousand downloads and a 4.8 rating in the App Store with that, so it kind of showed us that people want music to fill time and space. That’s something we do at work, something we do when we’re driving, something we do when we have a dinner party. Generative music is great for that, so part of our goal is to show that there’s an audience for this music and then encourage more creators to come on the platform to create music for this audience.
But at the same time we’re engaging that audience and showing them that music is not inaccessible. It does not need to be intimidating, and we want them to go and download Aimi Studio and become creators as well.
Amazing, amazing. What’s the monetization here? Aimi itself, I believe, has had a subscription model. What about Aimi Studio? How do you make money? And how do the artists who create music with Aimi make money?
So this has been another evolution in our thinking.
So very early on we thought about making a premium version of the app and we tinkered around with that and we realized very quickly that we don’t want to charge for music. We don’t want to make money off the music that comes out of Aimi. Instead, we want to charge for what you can do to the music.
So it’s really about the creativity that is what our unique offering is. There’s plenty of music out there, and if we’re charging you for music we’re competing with a whole industry we don’t want to compete with. We’d much rather be a unique creator platform.
So the way that we’ve positioned the app now: you can download it, and you can listen to all the experiences in the app. There are ten of them that we’ve released today, and within a couple of months, less than a couple of months, there’ll be another hundred from artists that you can listen to as well. These are top-tier artists around the world who have been using Aimi Studio for the past year.
Aimi Studio is a subscription product, so as an artist you subscribe to it monthly, and you can use it for your existing workflow, so you can drag your stems, your loops, your musical ideas … dump them in the studio.
Our machine learning will crawl all over those musical ideas and will understand those little bits and pieces of music at a level that very sophisticated artists know intuitively, but we’re doing it at a mathematical level. You can then have the studio export multi-track audio for you that you can go and drop into your favorite DAW and do whatever you want with.
You can also publish it to our app where you engage your most enthusiastic fans who want to interact with your music and you get a ton of feedback in return, and then last, you can also syndicate your music to YouTube.
So we just released, I think a week and a half ago, ten live channels on YouTube. And the unique thing about these channels is they are live: it’s Aimi on the web, making music that’s playing on a live YouTube channel. Also, you can imagine, as an artist you have a pile of stuff laying around on your laptop, and …
It’s my station …
Yeah, exactly, it’s your station, exactly. But you don’t have to curate songs; you don’t have to sit there and make tracks. You just take all this stuff that’s lying around your laptop. For every hour of music that the artists we’ve spoken to release, they have another hundred hours of unfinished stuff on their laptops. That’s gold to Aimi, and it’s gold because it’s the artist’s words. They just haven’t taken the time to make a story out of it.
Aimi is a perfect storyteller, and it can do it in your voice. So the YouTube channel is essentially a live station that you can syndicate in a few clicks.
There’s a lot to wrap your head around here, and there’s a lot in the general world of generative AI, but this is very different. On one hand, it’s a tool. On another, it’s a prosthesis. On another, it’s a platform. On another, it’s a creator. On another, it’s something that I’m using to create.
We’re used to this world where an artist, a creator, makes something that has a discrete length. It’s a three-minute song. It’s a long ten-minute song or something like that. When you create an experience on Aimi, does it have an end?
No, it does have a beginning …
It would need to 🙂
So it has a beginning, and the idea is to really evolve the music, much like if you went and watched a live performance. You brought up a really good point earlier.
Like, the heart and soul of music was engaging with a musician who was playing live; the musician and the audience became one. That symbiosis is something that was essentially stripped from music once we started trying to make money off of it, and the way we made money off of it was we recorded a three-minute song, put in a thirty-second ad, then another three-minute song.
And that’s how we’ve been sort of shackled by the confines of a song for so long. Part of what generative music can open up is this opportunity for the artist to really be freeform, and because you can expose these controls to the listener, each listener can essentially have that interaction with the artist in a very intimate way. Before, this was only possible when they performed live.
It’s kind of mind-blowing. It’s pretty amazing, and the YouTube angle makes me think: some of the most popular things on YouTube are white noise. Some of the most popular things on YouTube are nature sounds for four hours or six hours, something that somebody goes to sleep with.
But more traditional music is also massive and huge on YouTube. So much here to think about, so much here to imagine. I can’t wait to see Aimi Studio … to try it and see what I can create.
See what your programming language looks like. All that stuff. Anything else to add? Anything that we’re missing?
No, I thought that was really comprehensive. We’re launching the interactive music player, available today in the App Store. We released it earlier this morning; it’s on Android, and we’re releasing Windows and Mac versions as well in a few weeks. And then, of course, you’ve got the YouTube channels. So if you don’t want to interact with music and you just want to go listen, you can do so on YouTube today.
Excellent Edward. Thank you so much for your time.
My pleasure. Thank you for having me.
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, and Twitter’s chief information security officer, plus scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.
Subscribe to my YouTube channel, and connect on your podcast platform of choice: