🪩 Core Concepts

A Bit of History

  • At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper “Attention Is All You Need”.

  • The following year, in 2018, BERT was introduced and quickly gained prominence in the tech world.

  • Although the decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention. OpenAI later released GPT-3 and GPT-4.

  • Since 2022, source-available models have been gaining popularity, initially with BLOOM and LLaMA, though both carry restrictions on their field of use.

  • Throughout this period, LLMs have faced persistent issues such as hallucination and low-quality responses. That’s when RAG, LLM observability, and monitoring came into the picture.

Understanding the importance of evaluating and monitoring LLMs and RAG systems is therefore essential.

Please Note

The content discussed in this blog regarding LLMs (large language models) and RAG (retrieval-augmented generation) covers only the fundamental concepts. For a comprehensive understanding and advanced insights, it is recommended to consult additional resources.


🌟 LLM

What large language models (LLMs) are and why they are used across various business use cases. Learn more.

♻️ RAG model

RAG (Retrieval-Augmented Generation) is a framework that boosts the accuracy of generative AI by grounding responses in retrieved data. Learn more.
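To make the idea concrete, here is a minimal sketch of a RAG flow: retrieve relevant documents, assemble an augmented prompt, and hand that prompt to an LLM. The helper names and the word-overlap "retriever" are illustrative stand-ins (a real system would use a vector store and an actual model call), not the ragrank API.

```python
def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a toy stand-in for embedding similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


docs = [
    "RAG retrieves external documents to ground LLM answers.",
    "Transformers were introduced in 2017.",
    "BERT is an encoder-only model from 2018.",
]
query = "What does RAG retrieve?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # this prompt would then be sent to an LLM
```

The key point is that the model answers from retrieved evidence rather than from its parameters alone, which is what reduces hallucination.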

📏 Evaluation

Multiple customizations are available in the evaluation process. Learn more.

🎯 Why ragrank

Why we built ragrank, and the motivation and intentions behind it. Learn more.