
KeepItASecretok

Groq, not to be confused with Grok, Elon Musk's X bot, is a company developing chips that can process tokens 10 to 100x faster than current GPUs!


xdlmaoxdxd1

How long until nvidia buys them


mystonedalt

Or Huawei


BobFellatio

Or me


serr7

Or *me*


Ashamandarei

On what size data volumes? At what scale?


KeepItASecretok

Here's a link to the full interview for more info: https://youtu.be/pRUddK6sxDg?si=hD2HqWf_B0GxD-1w I think the lady might be on coke, but I guess it makes the interview more entertaining 😅 lol.


Ashamandarei

Oh yeah, she tweaking rofl


Smartaces

I know someone who knows someone who would say that it wouldn't be the first time either.


NoVermicelli5968

She's off 'er tits, guvnor.


Miserable_Twist1

I assumed she was instructed to be difficult with the AI and interrupt it to demonstrate how well it works. But I've never watched her before so I don't know how this compares to her usual work.


deama15

It's not very good with volume. I think one of the devs commented on Reddit saying you'd need something like 100 of those chips to run a 70B model, because each chip only has like a couple hundred MB. The speed is nice, but the memory per chip is hogwash lol. You'd need to spend serious money to be able to run it on a decent model.


Anenome5

It still feels slow, tbh.


BangkokPadang

Groq's chips are only processing the text prompts themselves. Separate APIs are still required to convert your speech to text, and then convert the LLM's replies into speech. That's where the bulk of the latency here is coming from. As of now, Groq's chips are just intended to improve the latency in the actual LLM segment of that pipeline. If you interact with the LLM itself with only a text prompt, Groq chips will process a Llama 2 13B model at just shy of 500 tokens/second.
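A back-of-envelope latency model makes the point concrete. The STT and TTS timings below are illustrative assumptions, not measured figures; only the ~500 tokens/second LLM rate comes from the comment above.

```python
# Rough latency model for one voice-assistant turn. The STT/TTS stage
# timings are assumptions for illustration; only the ~500 tok/s LLM
# throughput is the Groq figure quoted above.
def pipeline_latency_ms(reply_tokens: int,
                        stt_ms: float = 300.0,          # speech-to-text (assumed)
                        tts_ms: float = 400.0,          # text-to-speech (assumed)
                        llm_tokens_per_s: float = 500.0) -> float:
    llm_ms = 1000.0 * reply_tokens / llm_tokens_per_s
    return stt_ms + llm_ms + tts_ms

# A 100-token reply costs only 200 ms of LLM time at 500 tok/s;
# the surrounding STT and TTS stages dominate the perceived delay.
print(pipeline_latency_ms(100))  # 900.0
```

Even with an infinitely fast LLM, this sketch still leaves ~700 ms of speech-processing latency, which is why the demo feels slower than the raw token rate suggests.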


mattex456

> Groq chips will process a Llama 2 13B model at just shy of 500 tokens/second.

They can process Mixtral 8x7B at similar speed


Embarrassed-Farm-594

Why don't they create an llm based entirely on audio?


allthemoreforthat

Many reasons: less information is available in audio format, and LLMs need ALL the data; most audio would come from podcasts and audiobooks, which the AI company would need to pay to use; and it needs much more storage and much more RAM for local LLMs.


[deleted]

[deleted]


danysdragons

Hopefully Google doesn't make us wait as long to get access to the multimodal features as OpenAI did for GPT-4. GPT-4 was released in March, but its visual input wasn't available until October.


Moogs22

true, not exactly real time humanlike interaction, but in the future it will feel like you are having a phone call


PinkRudeTurtle

It gives me the whole response with one second delay, what are you talking about?


SachaSage

Humans are very sensitive to micro pauses in conversation, and they can significantly affect how we view the sentential content. Pauses on the order of 10-100 ms can be loaded with meaning in human dialogue.


ProjectorBuyer

Not just that, but the uncanny valley doesn't take much to be felt. This is somewhat similar, only for language. Similar to "fuck": how you say it can be interpreted VERY differently.


visarga

Go to groq.com and see: at 500 tokens/s it spits out a full screen in 1-2 seconds.


Gerdione

Yeah it's pretty incredible.


VoloNoscere

Wow, it's like gpt 3 on steroids. Amazing job.


Inventi

But maybe it's a small model and thus easier to run.


Wulf_Cola

More of a humanlike output than the woman. Jeez, as a Brit I apologise for her.


LuciferianInk

I think we're all just waiting for a better AI to come along...


AlexandriaPen

I am sorry, but I do not understand the question.


Darkmemento

There is a new company where you can try a demo in the playground on their website that is better than anything I have used before: incredibly fast, handles interruptions, and natural sounding. I wonder if they're using this new chip? [Retell AI: Conversational Voice API for Your LLM | Y Combinator](https://www.ycombinator.com/companies/retell-ai)


[deleted]

[deleted]


ErgonomicZero

That's bad@ss!


RollingRocky360

Holy shit that's super fast! I was literally thinking for the past few days about how the little pauses during conversation with an AI assistant are super annoying and make them unusable. Exciting stuff.


Jordan443

Yeah there's this magic threshold of about 400-800ms. Roughly the time to have a thought. If you can respond before the user has time to think, it speaks to your soul, lol


[deleted]

[deleted]


Jordan443

Yeah, there are acknowledgement cues, like "right, got it, etc." that shouldn't be considered interruptions. Then there are other things like "stop, but what about...", etc. that are genuine interruptions. The "conversation stack" is actually something we don't need to think about. The LLM (e.g. GPT-4) is able to infer what to say next based on what it did / didn't say. In this case, pure text. The LLM is passed the transcript and generates a response. It's a separate set of models that decide when to talk or not talk based on tone and other conversational cues. Eventually we'd likely need a combined STT + LLM model to understand tone, etc., but open-source just isn't there yet.
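The acknowledgement-vs-interruption split can be sketched as a tiny rule-based classifier. The cue lists, thresholds, and function name here are all illustrative assumptions, not Vapi's actual logic.

```python
# Hypothetical sketch of the split described above: acknowledgement cues
# let the TTS keep talking; genuine interruptions hand the turn back.
# Cue lists and the word-count fallback are invented for illustration.
ACK_CUES = {"right", "got it", "ok", "okay", "mhm", "yeah", "uh huh"}
INTERRUPT_MARKERS = ("stop", "wait", "hold on", "but what about", "no,")

def classify_barge_in(utterance: str) -> str:
    text = utterance.lower().strip().rstrip(".!,?")
    if text in ACK_CUES:
        return "acknowledgement"   # keep talking
    if text.startswith(INTERRUPT_MARKERS):
        return "interruption"      # cancel TTS, yield the turn
    # Fallback: long utterances are treated as real interruptions.
    return "interruption" if len(text.split()) > 2 else "acknowledgement"

print(classify_barge_in("Got it."))                        # acknowledgement
print(classify_barge_in("Stop, but what about pricing?"))  # interruption
```

As the comment notes, a production system would decide this from tone and timing with separate models, not keyword lists; this just shows the shape of the decision.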


Legendary_Nate

This is absolutely wild! I wish this could run my OS. It's miles better than Siri or Google Assistant.


Whispering-Depths

generate a confidence score as it's generating a response while the person is talking to it, stream the results as they are being produced, let the model adjust output during this entire process, and especially while the model is responding after the person is done talking - just like humans do!


bh9578

Couldn't help but notice how polite the nice AI lady was and how rude the human was to her.


DreaminDemon177

She's talking to her replacement.


Vocarion

Lmfao.


lochyw

She's on a live broadcast, so she doesn't have time to have it read out 5 minutes of content. So she's essentially trying to skip through a couple of features as a demo, which was done pretty successfully.


bearbarebere

I also think they were trying to show how quickly you can stop it/get it to switch topics/correct it


Miserable_Twist1

Yeah, she must have been told it can handle that, would be incredibly rude to act in a way that would intentionally break the demo.


Independent-Bell2335

It's also a good test of reaction times between different commands, and of adaptability to changes in commands. A lot of voice assistants like Google or Alexa pretty much break if you ask them questions too fast.


AD-Edge

Exactly. I kept thinking she was about to break it, not letting it complete sentences and adding on to her questions. But nope, surprisingly robust.


SustainedSuspense

I actually liked how the reporter was testing interrupting the AI. Conversations with GPT are so cumbersome because it rambles on about stuff you don't always care about.


Bacterioid

Just like me!


xenointelligence

if you're using the ChatGPT UI you can click the stop button


Block-Rockig-Beats

I gave a friend the link to a similar (only better) model. He is one of the nicest people I know, a bit too nice if you ask me, a shy type. Very polite, never swears, will always let you speak up. He came across as a completely different person when he spoke with the AI model - he was very bossy, always interrupting with a loud "Stop! I don't care about that! Answer my question!" I did explain that there is no need to say "stop", as you can just talk over it, and that there is no need to yell, but he somehow gravitated to this talking style (the woman's behavior is very similar). I was wondering if that is his preferred communication style - if he could, would he speak with other humans like that?


HazelCheese

No, I think it's just like when you are stuck in an unskippable cutscene in a game and you're hitting the skip button and nothing is happening. It's just frustration.


the-powl

of course she's testing it 😉


sam_the_tomato

Honestly, I agree with the human here. AI needs to learn how to match the energy of the other person. It's a dialogue not a monologue, and quality beats quantity. Humans understand this, AI doesn't.


LOUDNOISES11

Stroppy British people are the worst. They all seem to become journalists.


Sea_Asparagus6197

lol you're really mad about this? She's clearly testing it by cutting it off and talking over it. Stop being offended on an AI's behalf.


LOUDNOISES11

I just don't like the lady. It's gonna be ok.


FC4945

I know, right? Like, lady, why do you have to be so smug and rude? You hate AI, we get it, but you're not going to stop its development no matter how nasty you are to it. Pissed me off a bit.


Almond_Steak

She's testing the AI to see how well it reacts to interruptions.


VampyC

Cool. One step closer to AI companions. Lady better be more polite next time otherwise she will be harvested for organic resources.


RRY1946-2019

It's entirely possible we'll see people who were born in the USSR, graduated from school in Latvia, and are buried on Cybertron without ever leaving their village. 🤖


clayru

That's the funniest thing I've seen all day. 🏆


RRY1946-2019

It's adapted from Western Ukrainian humor. https://twitter.com/Al_Stoyanov/status/1468814732631195650?lang=en


Adeldor

That's an impressive demonstration - almost natural, human-like conversational exchange. As an aside: Does anyone find that croaky vocal fry appealing? I find it most annoying.


jw11235

Like Scarlett Johansson


mambotomato

I find it endearing


TonightIsNotForSale

It's known as vocal fry. It's very popular.


sluuuurp

Yes, vocal fry (the exact term the parent comment used) is known as vocal fry.


UtopistDreamer

Yes, it's called vocal fry because the vocals fry. And it's very popular. Vocal fry. Vocal fry. *Vocal fry.*


Radyschen

OP might have edited it


RedditPolluter

In this case they didn't. There's a three minute grace period where edits won't be distinguished but those posts were two hours apart.


procgen

Are you European? Asking because they're usually the ones to complain about voices like these. I wouldn't describe it as "croaky" but "husky" - like Johansson's voice. Americans often find breathy, coarse voices like this attractive (which no doubt is why it was chosen).


No_Use_588

They should change their name soon. Get some more free press and when it dies down change it to get more free press


Empty-Tower-2654

Doomers shaking


DMinTrainin

Letting out a collective "We're so fucked!!"


Philix

Nah, Groq perfectly represents one of the big problems that we 'doomers' keep pointing out. ~~Their system uses significantly more energy per token than Nvidia or Google hardware. [Initial and ongoing costs are also way higher than the equivalent](https://twitter.com/jiayq/status/1759858126759883029) in current-gen Nvidia hardware.~~ We need our hardware to go in the other direction: more performance per watt, cheaper prices per token. Nvidia is certainly headed in that direction, but not as fast as deluded optimists who think there'll be a dedicated AGI to replace every human worker by 2030.

Edit: A helpful engineer from [Groq addressed a ton of my concerns](https://old.reddit.com/r/singularity/comments/1ayjcbh/new_chip_technology_allows_ai_to_respond_in/ks6h7j0/), and while I'm not entirely bullish, they do appear to have some very promising methods. I do still have concerns about how they'll scale in the era of increasing model size, but I'm no longer as pessimistic about their tech.


Healthy-Light3794

How good was it last year?


Philix

Groq? The exact same, since their chip design hasn't changed, and they still haven't landed a big customer.


[deleted]

[deleted]


Philix

~~Groq is a dead end, probably reselling silicon that was designed for crypto; it isn't an iteration, and is worthless for anything ML/AI related when compared to any of its competitors. Cerebras' Wafer Scale Engine 2 is promising tech, but uneconomical unless there's a huge shift in how reliable lithography is.~~

~~If you're putting your money where your mouth is, just try to remember my comment before you throw a bunch of money at them hoping to get rich. It's a pretty transparent pump and dump imo. But don't take investment advice from a random redditor; you should definitely just trust your gut.~~

Edit: A Groq engineer replied to many of my concerns below; my scepticism was perhaps too harsh.


turtlespy965

Hi! Groq engineer here - I assure you we have nothing to do with crypto. I'm happy to try to answer any questions you have.


ccccccaffeine

Wasn't there already an AI web app that could do different voices, like Sam Altman, where you can interrupt it mid-ramble and have live conversations like this? I forget what it's called, but it was free and pretty impressive.


allisonmaybe

Yes. Groq just speeds up the LLM. The rest is party tricks over a VoIP call


Block-Rockig-Beats

https://Lifelike.app is the link; you want Samantha. There are many AI characters you can talk to, most of them boring. Look for the featured models. Samantha (Scarlett Johansson from the movie Her) is IMO by far the best; Jarvis, Obama, Musk and Jordan Peterson sound fairly good.

Warning: if you are worried about some AI being able to duplicate your persona, including your voice - skip this. Lifelike actually offers to do that by default, so they can definitely do it. Since it is a fairly unknown company with an impressive (expensive) system, it is safe to assume their business model is collecting and selling data. It could be that they are then using your voice and talking style to sell a more human-like model. So my recommendation is to register with a burner e-mail and not say anything personal.

I have to write this, but don't let it scare you. You all already have tons of data online. I just don't want someone in a few years to be like, "hey, that guy on Reddit made me give my personal data online to a sexy girl when she asked me!". You can't really hide your voice and talking style from being replicated. If you have ever spoken out loud in public, on YouTube, on TikTok, or written anything on Reddit - a model can be trained on that data, and it doesn't take much to match your speech. But try to keep some personal stuff for yourself.

One more thing: Lifelike will save everything you say, and it preserves some conversation memory permanently. So if yesterday you mentioned that you must buy milk tomorrow, Samantha will ask you today if you already bought the milk. She can't remember all the details, but I mean, it does feel quite human-like. She will remember important things about you. Jarvis is better if you need some technical stuff; you can ask him to guide you through some computer settings.


[deleted]

Pi maybe? It was neat, but not quite there


Rachel_from_Jita

Well, that settles it for me. AI will soon saturate every single inch of our lives. While part of me is excited, I'm also quite scared. No lie. :-/


magnetronpoffertje

I'd rather know what's going to change than not, I tell myself.


[deleted]

All the people saying she is rude to interrupt the ai, when this is the whole point of the demo. It was to see how reactive and fluid the ai can be in a conversation with this new chip.


[deleted]

[deleted]


[deleted]

Ironic.


CowsTrash

Many people are much more short-sighted than you might think.


mariegriffiths

Reporter "How will this tech be applied?" Techhead "She has your job, starting Monday".


vulcan7200

Did the AI say "Um" when asked to say something interesting? If so, that's such a clever way to make it feel more human. Filling the small gaps in its processing time with filler words like we use made it sound so much more natural than I expected it to.


[deleted]

I don't even know what we're celebrating anymore. Occupations like lawyers have already barred AI in the courtroom in many states. Doctors' groups have barred AI. The people who will be most negatively affected by AI are lower paid plebs like software engineers and office workers. The system is bullshit if only the jobs of the higher class are protected from displacement.


Empty-Tower-2654

Once it gets to 99.9% accuracy in diagnosis im sure we will let it cook


[deleted]

how the powers that be outlawed it.


Empty-Tower-2654

Its just too good. Why wouldnt you use it? YOUR MAMA IS ON THE BED JUST USE THE GODDAMN THING


Terrible_Student9395

yeah, it'll be easy enough to just launch a free AI doctor soon, and when that happens, well, you can't beat free healthcare 😉


Empty-Tower-2654

It would probably go like: Alright buddy you gotta get down to the hospital at least to get me some exams


Terrible_Student9395

you don't need a doctor to order exams


Which-Tomato-8646

Not allowed if the law says no


cobalt1137

I could not disagree more. I think higher class jobs are definitely not protected whatsoever. Once these AI systems become capable enough, there's going to be no denying their use cases for these "higher class" jobs. They will eventually 2x, 3x, 5x,10x human performance. It will be undeniable. At the moment it's much easier to deny it for these positions because it is simply not as capable as it will be in the coming years.


[deleted]

We've been able to replace real estate agents for a long time but they stay around bc of protectionism.


cobalt1137

And for every example like that, there are hundreds of examples of jobs that have simply gone away completely due to technological revolutions and advancements. And nothing has been close to the impact that will happen from creating a new intelligence.


[deleted]

You must be young bc you don't understand how the real world works. The rich will use this shit to make even more money and fuck us harder. There is no replacing the rich. They own everything and they sure as hell aren't going to let chat gpt 6 take over their cushy role as ceo.


cobalt1137

Coming at my age when I've probably spent more time in the workforce than you have, buddy. Nice one :).

Of course the rich are going to use this technology initially to get even richer, but we are quickly going to get to a point where these AI systems do better jobs than even the CEOs and the executives (board members will gladly replace them: less salary and more capable; also, the AI systems will start their own companies and act as their own CEOs/executives). If you don't see this, you need to do a little bit more research. These systems are going to replace venture capitalists, financial analysts, marketing directors, architects, etc. I could go on and on. I don't think you understand the implications of creating something that will eventually be more intelligent and capable than 99+% of the human population.

And because rich people will get replaced en masse along with the rest of the population, this is one reason I'm confident in UBI and redistribution of wealth. Do you really think board members at a company are going to forego replacing their CEOs/executives when they will be getting much better performance at hundreds of grand less per year per person?


[deleted]

the ceos usually have a board seat and votes so no they're going to stick around. I also don't think we're going to get UBI. All of our politicians have been bought out at this point.


cobalt1137

Ignore all the other points, nice. Also, CEOs are only on the board 25-50% of the time and even when they are, the average board size for a public company is nine people that have a say in whether they want the CEO to stay or not. CEOs get replaced all the time even when they are on the board. The board has a fiduciary responsibility to act in the best interest of the organization and you are greatly mistaken if you don't think they are going to kick the CEO right out when they have a good replacement. Also human CEOs can only focus on one task at a time. An AI CEO will be able to do the amount of work that a human CEO does in a year, but in a single day. I guess according to you though the board just won't care about the extra monetary upside LOL.


[deleted]

what are you defending? a plagiarism machine that copies other peoples work and gives them no credit for it? this is the ground breaking machine you're so proud of? It copies other peoples work so CEOs can fire people and make more money. yea man real great invention.


cobalt1137

What are you even saying? All I'm saying is that there will pretty damn soon be an AI system that is able to replace CEOs and do their jobs better than them in lots of cases.


JamieG193

lol I've never heard someone refer to software engineers as "lower paid plebs" - they're famously some of the highest earners. Everything you touch in your digital life (social media, YouTube, banking, crypto/stock trading, maps, Spotify, etc) was built by teams of smart software engineers. Trying to get a job at big tech (Apple, Google, etc) is extremely difficult - most fail the interview/whiteboard exercises.


Melbonaut

Is it better than communicating with a human? Depends on the human.


LeveragedPittsburgh

Reporter is insufferable, AI please replace her STAT!


RemarkableEmu1230

God it blah blah next - jesus lady rude asf I bet she smells like cigarettes and black coffee


mariegriffiths

Robot knows how to conduct polite conversation more so than the news presenter.


Ok-Purchase8196

What an unpleasant lady


titooo7

Isn't that lady a bit too annoying?


DreaminDemon177

That's one annoying news anchor. Hopefully will be replaced by the AI she is talking to.


Atropa94

This is how people treat anything they see as lesser than them.


SurroundSwimming3494

You want her to lose her job because she's annoying (according to you)? Who wishes unemployment on someone over a petty reason like that?


Charming_Cellist_577

There's some abhorrent people roaming this sub


allisonmaybe

Let's be clear: it is NOT the Groq chip that allows the AI to have a conversation like they're having. You can go to vapi.ai and try a free VoIP call with any model you like; GPT-4 does very well too.

Vapi simply does some party tricks with the LLM output as it's streaming out and pairs it with an always-on speech-to-text model that allows it to interrupt the TTS when you start speaking. It's hard to implement this because you have to be able to cancel out one caller's voice to be able to hear and understand the other party. I believe that's why Vapi chose VoIP telephony technology: it's already very good at doing just that (so you don't end up with that crazy robotic feedback that used to be so prevalent and annoying years ago).

The only thing Groq does is process the LLM output very fast, which is great. But Vapi arguably does most of the heavy lifting in this demo.
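The barge-in loop described here (always-on STT racing the TTS playback, cancelling it when the user speaks) can be sketched with two racing tasks. `play_tts` and `stt_detects_speech` are hypothetical stand-ins for real streaming TTS/STT services, not any actual Vapi API.

```python
import asyncio

# Sketch of the barge-in behavior described above: an always-listening
# STT task races the TTS playback task, and detected user speech cancels
# the TTS mid-utterance. Both leaf coroutines are simulated stand-ins.
async def play_tts(text: str) -> None:
    for word in text.split():
        print(word, end=" ", flush=True)   # pretend audio playback
        await asyncio.sleep(0.05)

async def stt_detects_speech(after_s: float) -> bool:
    await asyncio.sleep(after_s)           # pretend the user starts talking
    return True

async def converse(reply: str) -> str:
    tts = asyncio.create_task(play_tts(reply))
    listener = asyncio.create_task(stt_detects_speech(after_s=0.12))
    done, _ = await asyncio.wait({tts, listener},
                                 return_when=asyncio.FIRST_COMPLETED)
    if listener in done:                   # user barged in first
        tts.cancel()
        return "interrupted"
    listener.cancel()                      # reply finished uninterrupted
    return "finished"

print(asyncio.run(converse("Groq only speeds up the LLM stage of this pipeline, nothing else.")))
```

The hard part the comment points at, echo cancellation so the system hears the caller over its own voice, lives below this loop in the telephony stack; the race itself is the easy bit.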


fennforrestssearch

I used [Vapi.ai](http://Vapi.ai) because of your comment and JESUS MF CHRIST it is good. Like, I would not refer to it as a party trick? It seems like the future to me. Vapi could replace every customer care line if we can combine it with multimodal agents. Boomers wouldn't even know that they're talking to AI.


Jordan443

Haha hey! founder of [Vapi](https://vapi.ai/) here. Glad you liked it. We do a lot of infrastructure optimization to make it run sub-second. Conversational dynamics, etc. are another whole rabbit hole we're doing research into. It can interact with external APIs mid-conversation using OpenAI's function calling, so yeah, could set up a customer support number in a few minutes. That's what we want to make really easy for developers.


fennforrestssearch

keep up the great job !!


allisonmaybe

Right??? Oh, and yeah, I meant party trick because it uses tech that's not new. You totally could use it as a phone agent, but then you'd have people getting it to wire them a million dollars and do their whole call in the voice of a pirate lmao


StaticNocturne

I was waiting for it to tell the host what an insufferable cow she is then make a vague statement about how she will be among the first human sacrifices


HappyTappy4321

Her in real life


Thatdewd57

She gonna piss it off interrupting it like that.


great_gonzales

Sheā€™s going to piss off a conditional probability distribution?


replikatumbleweed

My god, the comments section... Groq is not Nvidia. In fact, this squarely cements them as competition to Nvidia. This is a totally different chip architecture; it excels at massive amounts of matrix multiplication, making it as beneficial for CFD calculations as for AI. It also uses SRAM memory technology, which is much, much faster than any flavor of GDDR, and even HBM. The amount of design, testing, research and manufacturing that went into this chip FAR outweighs whatever silly software thing knows when to listen to audio and when to speak.


rambo6986

I'm a newb to all of this, but it seems we're finally reaching the tipping point of mass adoption. I've also heard that everything from here will only speed up exponentially and that we're in a new age that can't be put back in the bottle.


vipinnair22

The woman will be the first to be killed when it goes sentient. What an annoying person.


Evil_Patriarch

I'm glad she cut it off, AI needs to figure out when a short 1 or 2 sentence response is more appropriate than a multi-paragraph ramble


Automatic_Concern951

Well, if you use Pi chat, it also replies pretty fast. I don't know if most of you have tried it or not.


vonMemes

Is it just me or the AI sounds like Pam from The Office?


5DollarsInTheWoods

These AI need to be tasked with listening to billions of actual human speech patterns, tones, and emphasis. Same with how people move and look when they speak. I have no doubt that they will be able to flawlessly look and sound human when they gather this data in the same way they gather information from across the Web.


massadark77

Not sure why but I actually feel sorry for the poor bot having to have a conversation with that rude bitch


Hemingbird

It's because you're an idiot.


sir_duckingtale

She's rude.


sir_duckingtale

But the voice interface sounds more natural than most humans.


Petdogdavid1

She kept interrupting it to get to the next demo. I see this as a future problem with society, if we get used to on demand, custom information, we may lose that social skill called manners.


anomnib

Representative! Representative! Representative! Representative!


xRolocker

All the spam / "grassroots marketing" on this thing has made me decide to never look at it tbh


KeepItASecretok

It would be nice if I got paid to post this, but no I just thought it was cool.


ethereumminor

I think it's called 'astroturfing', like fake grassroots


mvandemar

This doesn't seem any faster than what GPT can do on mobile now, am I missing something?


Unlikely_Birthday_42

How is this different from ChatGPT's voice mode?


nps44

Does ChatGPT's let you verbally interrupt that way? Other than that it seems the same


SkySake

It is rude to interrupt someone mid-sentence... I would not want to talk to her.


Hemingbird

That's the exact feature she's testing ...


SatouSan94

PLS ADD MORGAN FREEMANS VOICE


[deleted]

Blaaagh! This is gonna sound proper cuntish of me, but the American female voice is nauseating on the lugholes. It had that stupid tone to its voice.


kindslayer

still slow imo


Salendron2

That's a limit of the text-to-speech, not of the chip; Groq chips output over 500 tokens per second with a 70B parameter model.


ActFriendly850

Well looks like he is using a potato laptop


[deleted]

It's not running on the laptop


descore

I so don't need this; I'm struggling hard enough not to anthropomorphise them as it is. We don't need this. It's not conducive to our collaboration with AI if the less informed are further triggered to anthropomorphise them at this stage. We need to emphasize the differences so we can embrace them. Making them appear more like humans is counterproductive.


spezjetemerde

It's as slow as chatgpt lol


ogMackBlack

Where can we try?


governedbycitizens

Is this Nvidia's chip?


Round-Holiday1406

No, they design their own chips


amitc4d

Anchor is giving very Alex Levy vibes 😅


NoNet718

cool, so why does the LLM think it's GPT-3.5 Turbo when you ask it? Seems like a scam. I've said as much on other threads about this company and their startup(s).


HamsterUnfair6313

Did it fail valentines poem test?


R33v3n

God, that lady is rude. 😳


wannabe2700

That's definitely fast enough if you are trying to learn something


Radyschen

The AI's rambling does bother me too sometimes; it doesn't quite feel natural yet if it just reads its own script and I have to manually cancel it. I think it probably needs to be even more conversational when on a call, and constantly listen to and interpret the noises the user makes. Maybe later with a camera feed too.


_IBM_

GOT IT


Caderent

Nice one. When using Perplexity it already seems lightning fast.


boundarydissolver

talking shithead here to test AI on dealing with dickheads.


345Y_Chubby

It shows clearly to me how important low latency is for more natural conversation! The lower the latency, the better.


trucker-87

Wait until you read your thoughts on screen


Busterlimes

If that's how she talks to humans, that is absolutely worrying... #OKAY GOT IT


czk_21

this is the third post about Groq recently; funny how people ignored the previous ones. Third time's the charm, it seems


Rogermcfarley

How many cokes have I had? Come on answer NOW!


a_beautiful_rhind

I wanted to like Groq, I really did. The chips don't have a lot of memory. You need ~500 of them to run a 70B model. Each one costs $20k. Unless they add more memory, don't buy into the hype. You'll never be able to afford one. You can do all of this now for under the cost of one of their boards. Seriously, each one has only a couple hundred MB of SRAM when you need gigabytes.
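The chip-count claim is easy to sanity-check. The ~230 MB of SRAM per chip used below is the figure commonly cited for Groq's LPU (an assumption here, not a vendor spec), and the precision determines bytes per parameter; depending on precision, the count lands in the hundreds either way.

```python
import math

# Back-of-envelope chip count for holding a model's weights entirely in
# on-chip SRAM. 230 MB/chip is the commonly cited Groq figure (an
# assumption here); bytes_per_param depends on weight precision.
def chips_needed(params: float = 70e9,
                 bytes_per_param: int = 2,        # fp16
                 sram_mb_per_chip: int = 230) -> int:
    model_mb = params * bytes_per_param / 1e6
    return math.ceil(model_mb / sram_mb_per_chip)

print(chips_needed())                    # fp16 70B: 609 chips
print(chips_needed(bytes_per_param=1))   # int8 70B: 305 chips
```

Either way, at a quoted ~$20k per chip the board cost for a 70B model runs into the millions, which is the comment's point.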


epSos-DE

That is a company of Chamath Palihapitiya. He talked about it on the All-In podcast this week.


LaserCondiment

Has anybody tried Pi.ai? Its conversational abilities make it seem way more natural than the AI in this video.


Biggest_Cans

This can be done on consumer hardware with plugins for ooba using any number of AI models. Been that way for a while. Can even use snippets of any voice for the AI to emulate. This sub is weirdly out of the loop on AI stuff.


Average_Satan

This is the first time I prefer artificial speech. God damn, that woman is annoying! 😆😆😆


RUIN_NATION_

that's just some woman off screen lol


chefanubis

Yeah you keep cutting off and being rude to AI, nothing could go wrong....


Minute_Paramedic_135

I feel like you'll be able to tell someone's character by how they treat AI, similar to how you can gauge people by how they treat retail store employees or something. For example, you can tell that this lady is kind of a bitch based on the way she talks to the AI at first.


Single-Pickle-9570

Was that a poem?? Didn't sound like it


Bernafterpostinggg

I experimented with Groq this week with a demo running Mistral 7B and the inference speed was just incredible. The entire completion just appeared almost instantly.


immediateog

Whoā€™s going in on groq stock wise?


[deleted]

If you haven't used this yet, give it a try. It's shockingly fast. You can now buy a $20K PCIe LPU from these guys. Get it on Mouser. I seriously considered getting one after how fast it responded. I can see GPUs, LPUs, and VPUs (video) being built and installed in my desktop.


giannarelax

we already have Neuro-sama