  1. Using PyTorch to compute gradients

    Computing gradients using PyTorch

    Computing gradients is one of the most fundamental operations in deep learning. It is used in the backpropagation step when updating the weights of a network.

    Next is a simple example of how PyTorch does gradient computation.

    $$ y = 3x^2 + 2x + 1 $$

    Compute the gradient at x = 2. Analytically, dy/dx = 6x + 2, which is 14 at x = 2.

    import torch

    # create a tensor with gradient tracking enabled
    x = torch.tensor([[2.]], requires_grad=True)
    # define y = 3x^2 + 2x + 1
    y = 3*(x**2) + 2*x + 1
    # backpropagate to compute dy/dx
    y.backward()
    print(x.grad)
    

    Output: tensor([[14.]])
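    As a small extension not in the original snippet, the same pattern works for several inputs at once if y is first reduced to a scalar, since backward() expects a scalar output; a minimal sketch:

    import torch

    # track gradients for three x values at once
    x = torch.tensor([1., 2., 3.], requires_grad=True)
    y = 3*(x**2) + 2*x + 1

    # backward() needs a scalar, so sum the outputs first;
    # each element then receives dy_i/dx_i = 6*x_i + 2
    y.sum().backward()
    print(x.grad)  # tensor([ 8., 14., 20.])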

    read more
  2. How to use the Ollama Python library to give context before a query

    Sending context to an LLM before it generates a response

    import ollama
    
    # Define the context and question
    context = """
    Indian shooting legend Abhinav Bindra believes that despite the country having a population of 1.4 billion people, the “playing population” of India remains significantly less.
    
    Bindra was speaking in the aftermath of India’s 2024 Paris Olympics campaign, where the nation won just six medals (one silver and five bronze), despite much being expected from a majority of the athletes.
    
    And on another six occasions in the Paris Games, Indian athletes endured fourth-place finishes. Had those fourth-place finishes been converted into medals, India’s medals tally might very well have gone into double digits.
    
    """
    question = "How many medals india won in olympic 2024? use the given context only to answer my question"
    
    # Structure the messages
    messages = [
        {"role": "system", "content": context},
        {"role": "user", "content": question}
    ]
    
    # Make the API call
    response = ollama.chat(model="llama3 …
    read more
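    The excerpt cuts off mid-call. Below is a minimal, self-contained sketch of how such a call is typically completed with the Ollama Python library, assuming a locally pulled llama3 model; the context and question strings here are shortened placeholders, not the original ones:

    import ollama

    # hypothetical minimal example: the system message carries the context,
    # the user message carries the question
    messages = [
        {"role": "system", "content": "Context: India won six medals at the 2024 Paris Olympics."},
        {"role": "user", "content": "How many medals did India win? Use only the context."},
    ]

    # chat() sends the conversation to the local Ollama server
    response = ollama.chat(model="llama3", messages=messages)

    # the generated answer is in the response's message content
    print(response["message"]["content"])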
  3. Using Chroma DB as a search engine

    ChromaDB, an out-of-the-box semantic search engine

    I have always wanted a solution on my laptop that lets me communicate with it like a human, like talking to an assistant in natural language and getting things done.
    With the advent of LLMs, and transformers in general, we now have access to AI agents that can make us 2-10X more productive.

    I am interested in building small, smart solutions for my personal laptop that make me x% more productive. In the same direction, I am going to work on a tool that doesn't require a graphics card to run but still responds to my text queries in an acceptable time frame.

    How I am going to use this tool:
    - searching text in PDFs available on my laptop
    - searching for new items in the RSS feeds already synced to my laptop
    - searching code snippets
    - n other ways I am …

    read more
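    A minimal sketch of the kind of local semantic search this describes, using the chromadb Python package; the collection name, storage path, and documents below are hypothetical examples:

    import chromadb

    # persistent client stores the index on disk (path is a made-up example)
    client = chromadb.PersistentClient(path="./chroma_db")

    # a collection holds documents and their embeddings
    collection = client.get_or_create_collection(name="laptop_notes")

    # add a few text snippets; Chroma embeds them with its default model
    collection.add(
        documents=[
            "Notes on computing gradients with PyTorch",
            "New items from this morning's RSS feeds",
            "Code snippet: binary search in Python",
        ],
        ids=["doc1", "doc2", "doc3"],
    )

    # query in natural language; results are ranked by semantic similarity
    results = collection.query(query_texts=["where are my code snippets?"], n_results=2)
    print(results["documents"])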
