
Data News — Week 24.30

Data News #24.30 — TV shopping for foundational models (OpenAI, Mistral, Meta, Microsoft, HF), newly released BigQuery stuff, and more, obviously.

Christophe Blefari
Tallinn (credits)

Dear members, it's Summer Data News, the only news you can consume by the pool, at the beach, or at the office if you're not so lucky. This week I'm writing from the Baltics, nomading a bit around Eastern and Northern Europe.

I'm pleased to announce that we have successfully closed the CfP for Forward Data Conf: we received nearly 100 submissions, and the program committee is currently reviewing them all. Many thanks to everyone who trusted us and submitted a talk for the conference (especially the DN members!).

We also announced our first guest speaker, Joe Reis. Joe is a great speaker; he wrote Fundamentals of Data Engineering, one of the bibles of data engineering, and I can't wait to hear him at Forward Data. He is currently writing his second book, about data modeling.

Forward Data is a 1-day conference I will co-organise on November 25th in Paris. It will be a day to shape the future of the data community, where teams can come to learn and grow together.

AI News 🤖

Some days, AI News is like a TV shopping show. Over the past two weeks, a few dozen models have been released, and I'd like to introduce them to you.

New models & stuff

  • OpenAI — OpenAI is trying to keep leading the charge, releasing models the way Apple releases products.
    • GPT-4o mini: advancing cost-efficient intelligence — After GPT-4o, which brought great performance, became the new flagship model, and is available in the free tier, OpenAI released a smaller version of it, the mini. According to the benchmarks, GPT-4o mini is close to GPT-4o in performance and best in class among small models. Even though OpenAI did not disclose how small it is, a few people claim it's an 8B model.
    • Fine-tune GPT-4o for free — Until September 23, 2024, GPT-4o mini is free to fine-tune. Each organization gets 2M training tokens per 24-hour period, and any overage is charged at $3.00/1M tokens. Worth trying [docs]; see the sketch after this list.
    • SearchGPT, new OpenAI product — Yesterday, OpenAI unveiled its latest product, SearchGPT, a prototype AI search application. The system generates answers while citing reliable sources. The announcement coincides with Google Search reporting an 11% revenue increase for the last quarter, reaching $64 billion. It shows that search did not disappear with the advent of GPTs.
  • Meta — It's crazy how Meta, which suffered an unintentional leak of the LLaMA weights on torrents a year ago, is now the company advocating for open models and leading this part of the ecosystem.
    • LLaMA 3.1 is out — The model comes in 3 versions: the largest with 405B parameters and 2 smaller ones (70B and 8B). They even released a 92-page whitepaper explaining how they trained it, the expected performance, and what you can do with it. Our dear friend Mark even wrote an ode to open source, a kind of manifesto. The announcement works as a summary if you want the short version.
    • Meta won’t release its multimodal Llama AI model in the EU — It would have been perfect if Meta complied with the rules, but in the end Meta is Meta. After lobbying, and like a child punished for misbehaving, they announced they will not release their multimodal AI in Europe because the regulatory environment is too "unpredictable". A polite way of saying they used training data they should not have.

      This last point relates to tech giants (Apple, Nvidia, Salesforce) stealing YouTube subtitles to train foundational models.
    • Meta Chameleon — Finally, Chameleon is available on HuggingFace. It's Meta's mixed-modal, early-fusion foundation model, which means it can understand and generate both text and images.
  • Mistral — The US-French company is keeping pace with the other giants who are going back to open models.
    • MathΣtral — A 7B model for math reasoning and scientific discovery, under the Apache license. I'll try it soon for something I'm cooking.
    • Codestral Mamba — A 7B model for code generation, under the Apache license.
    • Mistral Large 2 — Competing directly with the large LLaMA 3.1, Mistral Large 2 has 123B parameters and is at the moment the closest model to GPT-4o, which still sets the benchmark.
  • Microsoft Phi-3 models — Microsoft continues to try hard at this game with its Phi-3 models, available in Azure. But who cares?
  • SmolLM — blazingly fast and remarkably powerful — HuggingFace released new state-of-the-art small models (135M, 360M, and 1.7B parameters) trained on an open corpus; a quick loading sketch follows the fine-tuning example below.
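
If you want to try the free fine-tuning window mentioned above, here is a minimal sketch using the OpenAI Python SDK. It assumes your API key is in the environment, train.jsonl is a hypothetical file of chat-formatted examples, and gpt-4o-mini-2024-07-18 is the fine-tunable snapshot name at the time of writing; check the docs before running.

```python
# Minimal sketch: fine-tuning GPT-4o mini via the OpenAI API.
# Assumes OPENAI_API_KEY is set; train.jsonl is a hypothetical file
# with one {"messages": [...]} chat example per line.
from openai import OpenAI

client = OpenAI()

# Upload the training data
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job on the GPT-4o mini snapshot
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)  # poll later with client.fine_tuning.jobs.retrieve(job.id)
```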
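
And since the SmolLM checkpoints fit on a laptop, loading one with transformers is straightforward. A minimal sketch, assuming the HuggingFaceTB/SmolLM-135M hub id (the smallest of the three sizes):

```python
# Minimal sketch: text generation with the smallest SmolLM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-135M"  # also: SmolLM-360M, SmolLM-1.7B
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Data engineering is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```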

Articles

Because AI and GenAI are not only about models, a few great articles have been written as well.

Fast News ⚡

Because the fast news is always the best.

Data economy 💰


See you next week ❤️
