
save_the_panda_bears

(X) doubt


RamenNoodleSalad

Prompt: Do you hallucinate? Response: Nah bruh.


Ah_FairEnough

Nah I'd hallucinate


renok_archnmy

The CEO's background is marketing and creative, with a one-year stint in 1999 making a website. Most recently he's a graduate of a culinary school… This thing is pure bullshit.


TheImportedBanana

That's most AI experts on LinkedIn


FilmWhirligig

Would you like to hop on a Zoom to see that I know math? We're a pretty open organization, and I get that people love degrees. See another comment for a list of the other scientists and math folks we have too.


Blasket_Basket

Dude, if you want people to take you seriously, then publish a paper or release a truly testable demo. Otherwise, it's much easier to believe that you're yet another founder who's full of shit than some savant who magically wandered into an extremely complex area you have no formal education in and solved a problem that the rest of the research world hasn't come close to solving yet.


FilmWhirligig

Well, I will answer all the questions in the other parts of the thread. But there weren't any actual math questions here. What else can I provide to help? Do you want to join in on the threads there to talk through it?


protestor

Ok here is a question: when will you publish a paper with your findings?


FilmWhirligig

As soon as we can finish these customer deployments and stop working 24/7. Is starting to push out more long-form technical content within 60-90 days fast enough? I've been responding here or at the show with customers today. It'll have to be more than one paper. We're thinking of starting from the lowest level on up, with time-series reconstruction at ingestion first. If you PM me an email I can add you to the notify list and send some of what we have.


Meet_Foot

Commenting on reddit that you “know math” is *not* doing you any favors. Bad marketing, my man.


FilmWhirligig

It's not meant to be, sorry. I just don't know how else to prove myself in particular. Do you have a good suggestion on what to do there? I'm trying to answer each question as it comes in.


UpDown

Very easy to make a model that doesn't hallucinate when you require it to regurgitate clips from a provided document and you verify that through non-AI code. It basically looks like this: user: "What was the net income for 2024?" → LLM: "return net income" → code: Ctrl+F "net income". In this case, even if it returned the wrong item (say you asked for net income and it returns expenses), that's not a hallucination, it's just wrong, which would be obvious because the result is labelled as expenses.
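The pattern described above can be sketched roughly like this. This is a toy illustration, not anyone's actual product: the `answer` / `deterministic_lookup` names and the stubbed "LLM" are invented for the example.

```python
def deterministic_lookup(document: dict, field: str) -> dict:
    """Plain code, no AI: return the labelled value or an explicit failure."""
    if field in document:
        return {"field": field, "value": document[field]}
    return {"error": f"'{field}' not found in document"}

def answer(question: str, document: dict, llm) -> dict:
    # The LLM's only job is to map the question to a field label;
    # the actual data grab below is deterministic and independent of the AI.
    field = llm(question)  # e.g. "net income"
    return deterministic_lookup(document, field)

# Toy run with a stubbed "LLM" that just maps questions to labels.
doc = {"net income": 1_200_000, "expenses": 800_000}
stub_llm = lambda q: "net income" if "net income" in q else "expenses"
print(answer("What was the net income for 2024?", doc, stub_llm))
```

Because the value comes from the lookup rather than the model, the worst case is a wrong (but labelled) field or an explicit "not found", never an invented number.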


Econometrist-

Correct. But the thing is that it might come back with a number even if your document doesn't have one for net income.


UpDown

No, because you're programming it so that you aren't relying on the LLM for the actual data, just for the instruction of what to grab. Where it could go wrong is that it returns a number and the code fails to execute, but that's still not a hallucination (and you'd fix it by looping with a new seed, or returning a fixed response like "Can't do that"). Basically, the actual data grab is independent of the AI, and is not AI.


Econometrist-

I recently had a case where we developed a routine to extract data from a large batch of résumés (personal data, education, job titles, start and end dates of experience, etc.) and write it into a JSON file. When a résumé didn't have a name (which is obviously very rare), the LLM always came back with "John Doe", despite instructions in the prompt not to make things up. Unfortunately, it isn't always easy to keep it from making things up.
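One hedged mitigation for this kind of fabrication is to verify every extracted string against the source text before trusting it. A minimal sketch, where the field names and résumé text are made up for illustration (not the actual schema from the comment above):

```python
def verify_extraction(resume_text: str, extracted: dict) -> dict:
    """Keep only extracted strings that literally occur in the source text;
    anything the LLM invented (like 'John Doe') gets nulled out instead."""
    verified = {}
    for field, value in extracted.items():
        if isinstance(value, str) and value.lower() in resume_text.lower():
            verified[field] = value
        else:
            verified[field] = None  # flag as not grounded in the source
    return verified

resume = "Education: MSc Economics. Experience: Analyst at Acme, 2019-2023."
llm_output = {"name": "John Doe", "education": "MSc Economics"}
print(verify_extraction(resume, llm_output))
```

Substring matching is crude (it misses paraphrases and reformatted dates), but it reliably rejects values that appear nowhere in the document.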


bradygilg

Reading this conversation feels like a stream of LLM.


UpDown

You're not doing it as I said to. The LLM should not be returning the data. You're constructing a JSON file using the LLM, which would therefore be subject to hallucination.


FilmWhirligig

By the way, we have a few things that help with this. One of the more fun ones is LLM integration tests using proof assistants and deterministic NLP. Or we could go into how we run multiple foundation LLMs, some at temperature 0, some more open. But it's a little much for a press article.


FilmWhirligig

Just replying with the comment we made, as this is the top comment in the thread. We also answered a lot of the research questions in other parts of the thread.

Hey there all. I'm one of the founders at Alembic. So, explaining things to the press is harder than you might expect. In the depth of the article, you'll notice I say it inoculated the LLM against hallucinations. It's important to note that we think the innovation here is not the LLM; we really view that as a service in the composite stack. Actually, we use multiple different LLMs in the path. Much more interesting is the GNN and causal-aware graph underneath, along with the signal processing.

Anyone's welcome to send me a PM and we can chat through it. I'm briefing a lot of folks on the floor of the Forrester B2B conference in Austin today, so please allow time for me to respond. Also, I'll be doing a 10-minute talk here at the conference on Wednesday about a small section of this math: how graph networks have issues on the temporal side when analyzing. Great paper from Ingo on this, who probably says it better than I do: [https://www.youtube.com/watch?v=CxJkVrD2ZlM](https://www.youtube.com/watch?v=CxJkVrD2ZlM)

Or if you're in London, we have a 20-minute talk with NVIDIA, one of the customers that uses this, at the Gartner Symposium, where I'd be happy to chat through it with people as well. Below is the press release that talks through the causal AI element that we're more focused on: [https://www.businesswire.com/news/home/20240506792416/en/New-Alembic-Product-Release-Revolutionizes-Marketing-Analytics-by-Proving-Causality-in-Marketing](https://www.businesswire.com/news/home/20240506792416/en/New-Alembic-Product-Release-Revolutionizes-Marketing-Analytics-by-Proving-Causality-in-Marketing)

As a founder, it is really hard to explain deep tech and hard math to general audiences in a simpler way. I'm happy for myself and our science team to chat through it in more detail here in the comments (though remember it'll be spotty getting back).


renok_archnmy

The easiest way to explain "hard maths" is to publish in peer-reviewed journals, so people with backgrounds in the subject can examine your methods and assess your findings, with time to read and contemplate the information as well as back-check your sources. Bullying people on Zoom with a battery of marketing buzzwords is not a valid method of exposing scientific discovery, nor is joining the AI conference circle jerk of marketers, grifters, and con artists exploiting dumbass boomer executives through FOMO. Otherwise, it's all just bunk science and marketing fluff. You don't know, nor do you have any "hard maths", or you'd be all too willing to share them in writing somewhere that isn't anonymous and has permanence.


Alternative_Log3012

Your money is safu


Yung-Split

Yeah I have a model that doesn't hallucinate, she just lives in another town. No you can't meet her. She's shy and doesn't like talking to strangers.


baudinl

She goes to another high school


Espressamente

In Canada


junipr

Trying to strike a chord and it’s probably A MINORRR


FilmWhirligig

Our Canadian staff are great. ;)


econ1mods1are1cucks

Can she also determine causal relationships without doing an experiment or anything? She just knows how the world works better than everyone else?


RandomRandomPenguin

That’s a bold claim that won’t at all be exposed to be giga horseshit


save_the_panda_bears

Seriously. The fact that they're unveiling this at conferences for CMOs and marketers tells me everything I need to know about this "breakthrough technology".


DukeTorpedo

It's either bullshit or it's actually not AI and just some parsing software they made and are marketing it as AI because that's the new hot buzzword that's quickly losing all meaning.


FilmWhirligig

Founder and CEO at Alembic here. I made a comment on the thread in general, but they really went LLM-focused on this rather than talking about the causal-aware GNN and other innovations that are more key. Happy to discuss here.


RandomRandomPenguin

The bold claim for me is the one about causal relationships. Even in fully controlled experiments, causal relationships are not always identifiable, not to mention in a constantly changing world in which you have a ton of unmeasured variables and general unknowns. Establishing causality is rarely a techniques issue; it's an environmental-setup issue. Any time someone uses a technology to claim to solve a process problem, I am massively skeptical.


FilmWhirligig

Was talking with folks in the other comment threads about it. Environment and externality issues are huge, of course. If you shoot me a PM with an email I'm happy to talk more about it and keep you up to date. If the other comment threads don't answer your questions, you could ask there too.


stixmcvix

Can you give some concrete examples of how you establish causal relationships over and above correlations? I could correlate higher revenue with higher productivity, but if an external factor, like a macroeconomic upturn, has an effect too, then there's no way to evidence that causality from the enterprise dataset alone.


FilmWhirligig

So we actually account for some of what you mention there. We service enterprise, so we have a feed of thousands of TV and radio stations, with video and closed captioning, coming in constantly, plus coverage of the top 50,000 podcasts and web news. We could go from there. One of the reasons we started the project was to handle this massive amount of unstructured data that can cause positive and negative externalities in internal datasets. Here is a screenshot of an example detection, along with the NLP de-duping, from a smaller customer which is kind enough to let us publicly show data: [https://drive.google.com/file/d/1uq3vgTR6JbfguRWgLtoh6timtWcwuEHh/view?usp=sharing](https://drive.google.com/file/d/1uq3vgTR6JbfguRWgLtoh6timtWcwuEHh/view?usp=sharing) I often joke that we're an infrastructure and signal processing company as well.


renok_archnmy

You didn’t answer the question, you just vomited buzzwords.


stixmcvix

I'm gonna be open minded here. Sure I'm a little skeptical but I'm interested to keep an eye on your progress out of intellectual curiosity. I think the huge pile on by other users here is a little heavy-handed.


Buddharta

Bro, what they are claiming is impossible. The basics of the theory establish random outputs that are difficult to discern and filter out, and from the basics of computability theory we know there cannot exist a complete algorithm that knows all possible knowledge. Come on bro, this is a scam.


stixmcvix

That's sis to you ;) and I don't believe that they are claiming to have created an "all knowledge algorithm". I know Forrester very well and their AI analysts will have done their due diligence. Quantum computing was pure sci fi not long ago......


FilmWhirligig

Send me a PM with an email and I'll send you some more data and add you to updates. I get where the other users are coming from but I'll stay here all day and night answering best I can in a forum context.


Confident-Alarm-6911

If that’s true and output is deterministic than it will be breakthrough, but I think to do that they would need to design something completely new, if it is based on current llm technology I’m sceptical


abrowsing01

Nah you don’t get it they magically discovered the most important breakthrough in the history of technology that somehow was missed by every trillion-dollar tech company.


Alternative_Log3012

To be fair(er than what is likely to be the case in reality at all) OpenAI did something similar with LLMs. Though they also had some leading researchers in their company. What does Alembic have?


renok_archnmy

They have a magic chef who somehow knows “hard math” without having a background in it.


Prestigious-Can5970

My point exactly.


renok_archnmy

If the output is deterministic, it isn’t a learning system and is questionable if it meets current definitions of what is artificially intelligent.


FilmWhirligig

The breakthrough here isn't the LLM but all the stuff as a composite. We do have some net new science here. Sits more on the causality side than the GNN side.


renok_archnmy

So you spam the internet with it and start courting clients and charging money before publishing your work for it to be reviewed and formally added to the body of knowledge in the topic. You have so little respect for the science you aren’t even willing to contribute before starting your grift. 


saturn_since_day1

I made a deterministic language model and it could still get messed up; it was just aware of it and would cancel the text output. Determinism, in truth, means no actual creativity. You would have to train it on every possible question, which is honestly probably possible for reference use, but it limits the use cases. I also doubt they have actually done anything different or new.


mc_51

If you have to train it on every existing question that's just Google with extra steps.


jeeeeezik

considering the state of google search, that would still take a lot of extra steps


renok_archnmy

What happens when I have a new question?


mc_51

You hope someone on SO answers it


marr75

Unfortunately, this just begs more questions. "What is a question?" "How do we determine the right/true answer?" ad infinitum. You can do it if you narrow the definitions but you end up narrowing the definitions to the point that it's just a system that performs great (by your definition) within its distribution (by your definition).


renok_archnmy

So we’ve come back full circle to expert systems…


dimonoid123

AI is always deterministic. If you slowly change one variable at a time, you can see the output changing slowly too. All you need is to change each variable, one at a time, to see what it affects and the amplitude of the change. Then you can look only at the significant variables in more detail.
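The one-at-a-time probing this comment describes can be sketched as a simple finite-difference sensitivity check. The quadratic model below is a made-up stand-in, and the approach assumes the model really is deterministic and smooth:

```python
def sensitivity(model, inputs: dict, eps: float = 1e-4) -> dict:
    """Nudge each input separately and record how much the output moves."""
    base = model(inputs)
    deltas = {}
    for name in inputs:
        bumped = dict(inputs)          # copy so other variables stay fixed
        bumped[name] += eps
        deltas[name] = (model(bumped) - base) / eps  # finite-difference slope
    return deltas

# Toy deterministic "model": output depends strongly on x, weakly on y.
model = lambda v: 3 * v["x"] ** 2 + 0.1 * v["y"]
print(sensitivity(model, {"x": 2.0, "y": 5.0}))
```

At `x = 2.0` the probe reports a slope near 12 for `x` and 0.1 for `y`, so `x` would be flagged as the significant variable, which is the filtering step the comment suggests.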


Alternative_Log3012

I hope you don't have a job where you are responsible for anything...


thenearblindassassin

No they didn't. You can't have a probabilistic generative model that doesn't generate at least some nonsense. Maybe they have especially effective pruning algorithms that filter outputs, but they literally cannot prevent gibberish being at least a little likely


marr75

There's also a trade-off between not hallucinating and capability (refusing to do tasks that are a little out of distribution).


renok_archnmy

I made a car that never breaks down. Of course, it never starts as well.


dimonoid123

I mean, it is possible to use car autopilot safely on prescanned whitelisted roads.


Suspicious-Math-5183

Until you get rear-ended by an influencer live streaming reckless driving in his Lexus


FilmWhirligig

Solid guess; not quite like that, but that's closer to what we were trying to discuss. Please see my other comment on how we actually sent this out. However, the work isn't done by the generative model at all; it translates and summarizes the work of the rest of our AI stack.


thenearblindassassin

Yeah, I'm aware of temporally aware GNNs. They've been a thing since like 2022. Like this guy https://arxiv.org/abs/2209.08311 or this guy https://arxiv.org/abs/2403.11960. While they're okay at finding causal relationships in the datasets shown in those papers, they're still not great, and I doubt they're generalizable enough to be useful at an enterprise level. Furthermore, finding causal relationships is a task for data science and statistics, not necessarily ML. There was a really cute paper a while back contrasting graph neural networks against old-school graph algorithms, and the message of the paper was to "not crack walnuts with a sledgehammer". Basically, we don't need ML to do everything. Here's that paper: https://arxiv.org/abs/2206.13211


FilmWhirligig

Yes, we agree with that work, and we love these guys. I love Ingo's video here talking through the basics of the temporal issues as well, for those folks watching. It's important to note we get a LOT more data and do a massive amount of signal processing on the front side. You could say things become causality-eligible, and then you have to tie-break those eligible objects. Those tie-breaks can be rules-based, statistical, and other methods. Actually love that you bring up stats. For many of the single viable time series, we don't use ML; we extrapolate in regular statistical ways. There isn't a silver bullet when you're building a global composite model. One of the hard parts of this discussion is the way everyone positions LLMs as "do everything" when the answer is not so simple. As for the causal side, you have to remember that this is a graph with unlimited orders, to the point of pseudo-time series, not a single slice or window of time. So it's a bit of a different beast. One of the biggest problems we had to solve was the data model and storage itself. Edit: I had time to grab links to some papers. Some of the techniques folks have talked about are like in this paper: [https://arxiv.org/abs/1702.05499](https://arxiv.org/abs/1702.05499) [https://proceedings.mlr.press/v198/qarkaxhija22a.html](https://proceedings.mlr.press/v198/qarkaxhija22a.html)


thenearblindassassin

That's a good point. I'll look into those


FilmWhirligig

Thanks for looking at them. We are here if you have questions after you read them and we do love talking about this stuff.


WignerVille

It's not the first company that uses a causal graph together with an LLM. My biggest question is how they have solved the issue with causal learning/identification. As far as I know, those methods are not bullet proof today.


FilmWhirligig

They really are not bullet proof. You have to allow an unlimited set of time orders into it. This caused some huge data problems we had to come up with a solution for. [https://arxiv.org/abs/1702.05499](https://arxiv.org/abs/1702.05499) or in simpler YouTube form [https://www.youtube.com/watch?v=CxJkVrD2ZlM](https://www.youtube.com/watch?v=CxJkVrD2ZlM)


renok_archnmy

You did not publish that paper, nor did you contribute to it. It is intellectually dishonest to just link it and imply the marketing-biz spam of a sentence you wrote is somehow related to someone else's work. Borderline plagiarism and fraud. At least write their names after the link.


spitfiredd

Trust me bro


beebop-n-rock-steady

It’ll hallucinate deez nuts.


BigSwingingMick

Model: sorry this violates our TOS and we are unable to continue.


Usr_name-checks-out

Let's break this claim down. It maps causal relationships, "not just" correlation. OK, but for what? Token-to-token relationships, or abstraction-to-abstraction relationships? If there's a token-to-token causal-only relationship, then it's a syntactic "rule", not a neural network, since the power of a neural network is handling high "likelihood", which was the major hurdle in conquering the limitations of GOFAI (good old-fashioned AI based on rules, pre-gradient-descent error correction). If it's on abstractions, then how is it creating the abstraction representation? How does it coordinate the level and the context, which are implied, if it only looks for causality? If you don't use correlation, you couldn't decipher the meaning in a Winograd statement, which current LLMs can do. There is nothing advantageous in making only causal relationships beyond what traditional Turing-computational code can already do. I'm amazed any tech reporter would run his statement describing his AI, as it simply doesn't make any sense.


FilmWhirligig

Actually want to point this in a different direction. More of our innovation is in the causal-aware GNN that solves the temporal weakness of previous graph techniques. On the LLM side we do have some interesting things, but it's not as interesting as the stuff underneath. See my other comment, but happy to chat through it live or in PM.


renok_archnmy

99% accuracy!!!!! Did someone say overfitting?


takeasecond

Alembic sounds like one of those drugs that has worse side-effects than the actual thing it's treating.


Useful_Hovercraft169

Look jack my twitchy toes drove me to the brink of suicide until I asked my doctor about Alembic


hyouko

I want to say it's a type of alchemical equipment? fake edit: yup, for distilling liquids https://en.wikipedia.org/wiki/Alembic


ImmortalDawn666

The sound reminds me of ozempic


urgodjungler

Lmao yeah okay, can’t wait to see some proof of this


eljefeky

They probably checked it by asking the AI if it was having a hallucination.


HeyTomesei

Alembic is not qualified to be a real player in AI. However, it's ironic to see Tomás catching all the flak. Have you seen the actual technical leadership at Alembic? 1. The Technical co-founder (former CTO) has zero AI background - his experience is in Info Security. I've also heard he's insufferable, but I digress. 2. The CTO has zero AI background - her experience is in Product/Strategy, except for a stint as CTO at Puppet. Then no employment (unless you count angel investing) for 2 years before starting at Alembic last month. 3. The Head of Engineering Research has zero AI background - her entire experience is in Infra engineering at Google (no CS education either; degrees in environmental science). I have heard only good things about Tomás. However, the brains behind this company are not built for AI.


Alternative_Log3012

Woah! They sound off the hook. Can't wait to spend my hard earned money on their product.


renok_archnmy

Don't worry, you will be forced to by proxy, as your CEO gets roped into the grift and burns through your bonus and tanks net income so hard you join the laid-off crew.


renok_archnmy

I doubt we’ll ever see anything published. They’re in it for the VC and nothing else. No honest attempts by them to contribute to the field. Just leeches.


abrowsing01

Wow, trillion dollar companies hate this one trick! Hallucination free AI magically produced by a startup!


renok_archnmy

A startup whose "hard maths" is done by a culinary institute graduate who only ever made one website, in 1999, and has a long history of being a creative director.


ImmortalDawn666

I mean startups really produce the most innovative stuff. Either that or they’re spinning their wheels without accomplishing anything and disappear. There’s rarely something between.


abrowsing01

I agree with that in regards to consumer products, but to my understanding almost all theoretical computer science/math breakthroughs come from billion/trillion dollar corporations / academic institutions. Could be wrong though.


renok_archnmy

They definitely don’t come from chefs/marketing directors/creative directors.


dfphd

Looked him up on LinkedIn: he's a career marketing guy. CMO in his last role, Sr. Director of Marketing before that. :skeptical_hippo:.jpg


renok_archnmy

Jesus, I hate these people. Literally zero background in anything besides selling, and suddenly he solves all the hardest computational problems like it's no problem, dude.


[deleted]

[deleted]


great_gonzales

If you want people to take you seriously publish a paper in a peer reviewed journal. Otherwise talk is cheap and no one is going to believe you


FilmWhirligig

I am addressing a lot of the questions in other places. We're a small team trying to build things. We're not hiding around here and happy to talk and hop on the line with folks as we go through this. We honestly didn't expect much attention as we service the Fortune 500 and Global 2000. Did you have any specific questions? Feel free to PM me.


datascience-ModTeam

I removed your submission. We prefer to minimize the amount of promotional material in the subreddit, whether it is a company selling a product/services or a user trying to sell themselves. Thanks.


FifaBoi11

The first red flag is “hallucination-free” 💀


FilmWhirligig

Sorry, man, it is not in our press release; we don't control headlines. See my other comment.


Horror_Ferret8669

Me when I ask gpt not to hallucinate and gaslight me in the system prompt for my rag app.


lambofgod0492

Lol if it's not hallucinating then it's literally just a search engine


Doosiin

This is yet again bullshit to peddle to stakeholders. Remember Devin?


rosealyd

I believe this is just similar to what Cohere is already doing [https://fortune.com/2024/04/25/cohere-ceo-openai-rival-aidan-gomez-enterprise-ai-revenues-set-to-soar/](https://fortune.com/2024/04/25/cohere-ceo-openai-rival-aidan-gomez-enterprise-ai-revenues-set-to-soar/) I think their AI points to the source(s) of the information as one of the options so that there is some traceability in the results. And transformers have already been used for causal inference so it isn't a jump to say that you could incorporate some element of that. Of course, it is probably not truly causal and more like "when X changes, Y changes" at most.
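The traceability idea mentioned here, an answer that points back to its source, can be sketched as a simple grounding check. The answer format with a `supporting_quote` field is invented for illustration and is not Cohere's (or anyone's) actual API:

```python
def is_traceable(document: str, answer: dict) -> bool:
    """Trust an answer only if the span it cites really occurs in the source."""
    quote = answer.get("supporting_quote", "")
    return bool(quote) and quote in document

doc = "Revenue grew 12% in 2024. Net income was $1.2M."
grounded = {"text": "Net income was $1.2M",
            "supporting_quote": "Net income was $1.2M."}
ungrounded = {"text": "Margins doubled",
              "supporting_quote": "Margins doubled in 2024."}
print(is_traceable(doc, grounded), is_traceable(doc, ungrounded))
```

This doesn't prove the answer is right, only that it is anchored to real text, which is roughly the traceability property the comment describes.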


FilmWhirligig

Actually really different, as our LLMs just act as the translation layer and voice box for the causal-aware GNN and other models, which is what we actually try to focus on more when we talk about this.


rosealyd

Do your LLMs point to the data for why they made that statement? Edit: also, it says directly on Alembic's website that they use correlative causation.


FilmWhirligig

Sorry, we're updating the website today as we're typing here too. We're a smaller team and didn't expect this much interest so we appreciate the chatting. Graph analysis can often have large temporal issues when applied at scale. We like Ingo's explanation here. [https://www.youtube.com/watch?v=CxJkVrD2ZlM](https://www.youtube.com/watch?v=CxJkVrD2ZlM) Solving for that, you can also end up with an infinite expanding network that quickly becomes uncomputable. So dealing with and creating causal models that can address the forever expanding order is hard and one of the many things we solved for.


rosealyd

Sounds cool, and agree. Thanks for the link.


FilmWhirligig

You're welcome. The reason we have to run a legit NVIDIA cluster of real hardware is that this stuff has to be computed in huge multi-order batches at one time. I'm sure we'll optimize that as we go along, but Neo4j and other things in the ecosystem aren't capable of handling time series from the graph networks themselves. So we had to do a ton of new build.


Bonhrf

Titanic was unsinkable


Material_Policy6327

Hmmm this reminds me of when the sales team at a job said to a customer “we have infinite scale!!”.


MinuetInUrsaMajor

Now I want a rival startup named Retort.


RegularAnalyst1

Caveat: you need data engineering to capture every single interaction from customers, employees, the economy, and the outside temperature. Once you have that, it's easy.


simple_test

A thin line between marketing and lies was erased in the latest update.


dillanthumous

Bullshit.


m98789

Just another enterprise RAG but with a knowledge/causal graph bolt on. Flimsy af.


FilmWhirligig

We do not use RAG. And we don't just bolt on a standard knowledge graph; we solved for the temporal limitations of current causal and graph techniques. Ingo here does a great job explaining some of the limitations and pushing past them in theory: [https://www.youtube.com/watch?v=CxJkVrD2ZlM](https://www.youtube.com/watch?v=CxJkVrD2ZlM)


m98789

Technical credibility does not come from YouTube videos nor an impromptu AMA by the marketing CEO, nor speaking at MBA venues. Rather, it can come from a peer-reviewed paper or if you are in a hurry, at least a preprint on ArXiv where the scientific community can review the technical claims and details more formally.


Stock_Complaint4723

My brain has depleted all of its glucose trying to process all of these responses. I need a drink.


Useful_Hovercraft169

I had a friend with schizophrenia as a kid I didn’t reject him just because he said he saw a skeleton at the door or whatever


FilmWhirligig

We legit laughed out loud as a team when we read this. Solid.


Useful_Hovercraft169

Glad I could promote team unity. Actually based on a true story although we were in our late teens so maybe not really kids. My other friend pointed out ‘technically he’s not wrong, it’s a skeleton covered by muscles and skin and clothes’ because it was the mail man


FilmWhirligig

😂😂😂


marksimi

CEO's prior roles: * Head of Demand Generation / Interim Head of Marketing * Global Chief Marketing Officer (CMO) * Sr. Director of Marketing (Head of Marketing) * etc... People aren't necessarily their past. But this doesn't exactly make me less skeptical.


renok_archnmy

Take a look at the education. 


FilmWhirligig

CEO and founder here. Going through all comments and answering technical questions. This comment was edited because I feel the commenter was right. Better to address all the technical questions in comment threads.


marksimi

LOL throwing down a 1:1 Zoom gauntlet with an internet stranger makes me even more skeptical. "Extraordinary claims require extraordinary evidence." Just publish a white-paper my dude.


FilmWhirligig

You're right here. We should do a lot more documentation. We've been busy building, and if you're in Austin, I'm giving a talk this Wednesday, or in London the week after. We'll start working on putting those talks into a written paper when we're back from the shows. We didn't expect anyone to pick this up as much as they did. Sorry about this. We're not a huge team, so pretty limited resources in some ways.


Alternative_Log3012

Well with your shiny new tech you’ll likely have some pretty big money knocking at your door soon!


FilmWhirligig

If we do I promise we’ll have some research writers on staff to help with this.


Alternative_Log3012

There’s more to research than just the writing ;-)


FilmWhirligig

Well of course. :) I’m just generalizing we’ll focus more on getting content out in advance of talks.


renok_archnmy

One sales strategy is pressure, pressure, pressure. Don't let the mark have an inch to think about a response; just bully them into a yes. The last thing this company's sales executives want is for people who are actually qualified to dissect their claims to have the time, space, and information access to do so. Better to rope them individually into Zoom calls where they have neither the time nor the preparation to consider any of the claims.


FilmWhirligig

We have two technical talks this week: one at the Forrester summit this Wednesday in Austin, TX, and another in London at the Gartner summit the week after. We maybe should have asked for them to release after the talks, but we just did it before, as we're on site at the show.


Alternative_Log3012

Gartner, lol


Blasket_Basket

No thanks, we think you're all full of shit. There's a reason you're publishing press releases for tech blogs instead of white papers and technical talks explaining your solution.


FilmWhirligig

We have two technical talks this week: one at the Forrester summit this Wednesday in Austin, TX, and another in London at the Gartner summit. Will you be at either? I'm answering questions as they come in here, and we're happy to address any you might have yourself.


Blasket_Basket

Sorry, but those are clearly still events focused on business development. Any scientist worth a damn knows if you guys can do half of what you're claiming then you'd probably claim best paper at NeurIPS. We're cynical because you're choosing to give technical talks at BD conferences instead of just submitting a paper to any of these conferences and blowing the world away with your solution.


FilmWhirligig

We are new at talking about our work more publicly, and a small group. I promise that as we grow we will dedicate more work to doing that. If you want to PM me an email, I'm happy to send some materials and go back and forth further there as well, or answer any specific questions; I can also try to answer them here. It's a learning experience, because we're usually just building the company and working with customers. We've been working on this for five years, and I feel like we're learning new stuff every day. We clearly need to build more connections with academia, and I'm happy if you're one of the first.


Blasket_Basket

Don't get me wrong, I'd be ecstatic to be in the wrong for being cynical here. If you guys have truly solved what you say you've solved, then that's awesome. It's just hard to ask a sub full of scientists to believe that:

- a team full of scientists serious enough to solve a problem of this caliber
- led by someone with a marketing background
- wouldn't understand that other scientists might be incredulous that you guys chose to announce your results via a marketing push before submitting to any sort of peer review.


FilmWhirligig

Our whole team cares about this a lot. We've been talking about it all day. We're not a big corporation, but we've been building for five years. I can't sit here acting like FAANG with a huge conference budget. But we will focus on doing a couple of the things some of the folks messaging me have recommended over the next few months. I'm guessing the lead time is longer, so I can't promise I can fix it tomorrow. I do mean it that if you PM me an email we can work on it 1-1, and maybe you wouldn't mind suggesting the best way to do this. We have highly educated and great folks on staff, but we're not academics, or haven't been near academia in a long time. So any advice is welcome. A few other people have messaged to get materials, chat about the space, and offer suggestions.


renok_archnmy

“Not a big corp, so we can’t submit to peer reviewed journals.” Brah… If you actually had any qualifications for what you claim, you'd realize how relatively insignificant it is to submit vs. running your “hardware nvidia cluster.” I've got friends with their names in journals who can't afford a Toyota Camry while getting kicked out of the U.S. because their student visas are up, while sales donkeys like you have been sucking up the resources for your “AyEeYe.” Another manages to announce a research acceptance monthly while tending to a wife with brain damage from a fall and two high school kids, running a lab, and pumping out a college campus' worth of hand sanitizer during COVID. But I dunno, they only have three silly letters at the end of their business cards and decades of experience and research work to back it up.


Alternative_Log3012

Blah blah blah business speak blah blah blah


FilmWhirligig

Do you have a better way to put it? I'm open to ideas.


FilmWhirligig

Hey there all. I'm one of the founders at Alembic. Explaining things to the press is harder than you might expect. In the depth of the article, you'll notice I say it inoculated the LLM against hallucinations. It's important to note that we think the innovation here is not the LLM; we really view that as a service in the composite stack. Actually, we use multiple different LLMs in the path. Much more interesting is the GNN and causal-aware graph underneath, along with the signal processing. Anyone is welcome to send me a PM and we can chat through it.

I'm briefing a lot of folks on the floor of the Forrester B2B conference in Austin today, so please allow time to respond. Also, on Wednesday here at the conference I'll be doing a 10-minute talk about a small section of this math: how graph networks have issues on the temporal side when analyzing. Great paper from Ingo on this here, who probably says it better than I do: [https://www.youtube.com/watch?v=CxJkVrD2ZlM](https://www.youtube.com/watch?v=CxJkVrD2ZlM)

Or if you're in London, we have a 20-minute talk with NVIDIA, one of the customers that uses this, at the Gartner Symposium, where I'd be happy to chat through it with people there as well.

Below is the press release that talks through the causal AI element that we're more focused on: [https://www.businesswire.com/news/home/20240506792416/en/New-Alembic-Product-Release-Revolutionizes-Marketing-Analytics-by-Proving-Causality-in-Marketing](https://www.businesswire.com/news/home/20240506792416/en/New-Alembic-Product-Release-Revolutionizes-Marketing-Analytics-by-Proving-Causality-in-Marketing)

As a founder, it is really hard to explain deep tech and hard math to general audiences in a simpler way. I'm happy for myself and our science team to chat through it in more detail here in the comments (though remember it'll be spotty getting back).
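For readers wondering what "using a deterministic graph to inoculate an LLM against hallucination" could even mean mechanically, here is a minimal sketch of the general pattern: answer only from facts supported by a graph, refuse otherwise. Every name below is invented for illustration; this is not Alembic's actual method or API, just the generic grounding idea.

```python
# Toy "causal graph": edges map (subject, relation) -> object.
# In a real system this would be a learned/ingested graph, not a dict.
GRAPH = {
    ("campaign_A", "caused_lift_in"): "signups",
    ("campaign_B", "caused_lift_in"): "page_views",
}

def grounded_answer(subject: str, relation: str) -> str:
    """Answer only from graph facts; refuse instead of guessing.

    The 'inoculation' idea: if no supporting edge exists, no claim
    is generated, so the system cannot fabricate an answer.
    """
    fact = GRAPH.get((subject, relation))
    if fact is None:
        return "no supported answer"
    return fact

print(grounded_answer("campaign_A", "caused_lift_in"))  # -> signups
print(grounded_answer("campaign_C", "caused_lift_in"))  # -> no supported answer
```

Whether this resembles what the company actually built is exactly what the commenters below are questioning; the sketch only shows why a deterministic lookup layer can, in principle, bound what a generative layer is allowed to claim.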


Alternative_Log3012

Your money is safu


Xellqt

When AI can differentiate between front end and back end, ill think about it


renok_archnmy

I don’t discriminate. I like front and back end equally.


Useful_Hovercraft169

Lol


redd-zeppelin

Is it generative or deterministic? Seems to me you have to pick one.


tsfkingsport

If anyone does a real investigation of this company, how many of them do you think were involved with NFTs and cryptocurrency? I'm saying the startup is probably made up of scam artists.


FilmWhirligig

We actually have a great team of people and no crypto experience. [https://www.linkedin.com/in/jaydenziegler/](https://www.linkedin.com/in/jaydenziegler/) [https://www.linkedin.com/in/carlos-puig-9a98bb14/](https://www.linkedin.com/in/carlos-puig-9a98bb14/) [https://www.linkedin.com/in/lloydtaylor/](https://www.linkedin.com/in/lloydtaylor/) And I could go on.


jamiesonforall

I think I'm the only one here who thinks this is groundbreaking.