Computing gradients is one of the most fundamental operations in deep learning. It is used in the backpropagation step when we update the weights of a network.
Below is a simple example of how PyTorch performs gradient computation.
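A minimal sketch of PyTorch's autograd in action (my own illustrative snippet, using a single scalar tensor):

```python
import torch

# Create a tensor and ask autograd to track operations on it
x = torch.tensor(3.0, requires_grad=True)

# Build a tiny computation graph: y = x^2
y = x ** 2

# Backpropagation: computes dy/dx and stores it in x.grad
y.backward()

print(x.grad)  # dy/dx = 2x = 6.0
```

Calling `backward()` walks the recorded graph in reverse and accumulates gradients into the `.grad` attribute of every leaf tensor with `requires_grad=True`; this is exactly what happens under the hood when an optimizer updates network weights.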
Sending context to the LLM before it generates a response
```python
import ollama

# Define the context and question
context = """Indian shooting legend Abhinav Bindra believes that despite the country having a population of 1.4 billion people, the “playing population” of India remains significantly less.

Bindra was speaking in the aftermath of India’s 2024 Paris Olympics campaign, where the nation won just six medals (one silver and five bronze), despite much being expected from a majority of the athletes.

And on another six occasions in the Paris Games, Indian athletes endured fourth-place finishes. Had those fourth-place finishes been converted into medals, India’s medals tally might very well have gone into double digits."""

question = "How many medals india won in olympic 2024? use the given context only to answer my question"

# Structure the messages
messages = [
    {"role": "system", "content": context},
    {"role": "user", "content": question},
]

# Make the API call
response = ollama.chat(model="llama3 …
```
I have always wanted a solution on my laptop that lets me communicate with it like a human: talking to an assistant in natural language and getting things done.
With the advent of LLMs, and transformers in general, we now have access to AI agents that can make us 2-10x more productive.
I am interested in building small, smart solutions for my personal laptop that make me x% more productive. In that direction, I am going to work on a tool which won't require a graphics card to run but will still respond to my text queries in an acceptable time frame.
How I am going to use this tool:
- searching text in PDFs available on my laptop
- searching for new items in the RSS feeds already synced to my laptop
- searching code snippets
- n other ways i am …
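For the first use case, a minimal sketch of the retrieval half, assuming the PDFs have already been converted to plain text (e.g. with `pdftotext`); `search_notes` and the `*.txt` layout are my own placeholder choices, not a finished design:

```python
from pathlib import Path

def search_notes(root: str, query: str) -> list[tuple[str, int, str]]:
    """Return (filename, line number, line) for every line containing the query."""
    hits = []
    # Walk over text files extracted from the PDFs beforehand
    for path in Path(root).rglob("*.txt"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            # Case-insensitive substring match keeps the sketch simple
            if query.lower() in line.lower():
                hits.append((path.name, lineno, line.strip()))
    return hits
```

The matching lines could then be stitched into a `system` message and handed to the local model, exactly like the context-passing example earlier in this post.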