Categories
ai cloud

How Many ‘Copilots’ Do We Need?

Dear companies, please stop using the term “Copilot” for everything. It is honestly confusing and doesn’t help anyone. Imagine a conversation like this:

Person A: Hey, have you tried Copilot? It's honestly amazing!
Person B: Yeah! I was using it to finish writing my Python assignment in half the time!
Person C: I didn't know you could do that! I was using Copilot to set up my AWS ECS services
Person D: Wait what? Isn't Copilot for summarizing emails and drafting Word documents?

¯\_(ツ)_/¯
How many “Copilots” are out there really? Let’s see.

GitHub Copilot

This should be familiar to developers.

GitHub Copilot is an AI coding assistant that helps you write code faster and with less effort, allowing you to focus more energy on problem solving and collaboration.

From the website

In short: Coding assistant

AWS Copilot

This one is less well-known, unless you are an AWS power user.

AWS Copilot is an open source command line interface that makes it easy for developers to build, release, and operate production ready containerized applications on AWS App Runner, Amazon ECS, and AWS Fargate.

From the website

To be fair, AWS Copilot was launched in 2020, before everyone started using the term to mean some form of AI assistant.

In short: Another AWS command line tool
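
For a taste of what it does, here is a sketch of a typical session (the actual prompts and flags depend on your setup):

copilot init      # interactively set up an application and a service
copilot deploy    # build the container image and deploy it to ECS/Fargate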

It gets more confusing from here.

Microsoft Copilot (formerly known as Bing Chat, or Bing AI)

I can’t actually find an official definition for Microsoft Copilot. Lol.
As of Nov 2024, if you do a search and end up on the above page, all you see is a chat interface and a page titled “Microsoft Copilot – Your AI companion”.

In short: ChatGPT variant

Note that if you sign in using your work Microsoft account, you will be unable to use Microsoft Copilot. You must either:

  1. Sign in with a personal account, or
  2. Sign out completely, or
  3. Use Microsoft 365 Copilot (see next section)

In addition, the features you can use in (1) and (2) are slightly different. Sounds confusing? Let’s go on.

Microsoft 365 Copilot

The official page for Microsoft 365 Copilot is full of marketing-speak. It doesn’t help that different features are available depending on the region you are in. For the purpose of this article, we’ll go with the US version.

Based on info from the page, Microsoft 365 Copilot is an AI-powered virtual assistant integrated into Microsoft 365 apps like Word, Excel, PowerPoint, and Outlook. The gist of what is included is covered in the pricing section – a per-user license is required. When you hear colleagues at work talking about “Copilot”, they are most likely referring to this.

In short: Clippy on steroids

Microsoft Dynamics 365 Copilot

There’s not a lot of content on this. Just a blog post from Microsoft. From what I can tell, Microsoft Dynamics 365 Copilot is an add-on to Microsoft Dynamics 365 range of products that deals with CRM and ERP.

In short: A chat interface for Dynamics 365

Microsoft Copilot in Azure

This newly launched feature is in preview, and the official website says: “Simplify operations and management from cloud to edge with an AI companion.”

In short: FAQ for Azure

Actually there are more, like this and that, and also not forgetting this. And I’m sure I’m missing quite a few others as well. Hopefully we will quickly move past this fad and avoid situations like the hypothetical conversation at the beginning of this article.

Or if you are still confused, you can always ask “Copilot”. 🙂

Categories
ai cloud

AWS AI Certifications

AWS is launching not one, but two new AI certifications, as demand for AI skills skyrockets.

For those who are unaware, AWS certifications come in four levels, roughly ordered by the difficulty/professional experience required:

  • Foundational
  • Associate
  • Professional
  • Specialty

Previously, the only AWS AI certification available was the specialty one: AWS Certified Machine Learning – Specialty, which is aimed at data scientists and ML engineers. To fill the intermediate gaps, AWS will be launching the following certifications:

(Foundational) AWS Certified AI Practitioner
(Associate) AWS Certified Machine Learning Engineer – Associate

The former is aimed at generalists (business analysts, product or project managers, sales professionals) whereas the latter targets developers/engineers doing ML work who may not be full-time ML specialists.

These new certifications are marked as beta, so expect the syllabus and content to change. They are currently offered at a discounted price of USD75. The usual foundational certifications are USD100 and associate ones are USD150, so this is quite a good offer.

Registration for the new certifications opens on 13 Aug 2024, and you can use the Skill Builder resources listed on the certification pages to prepare for them.

Categories
ai cloud

Build RAG applications with MongoDB Atlas, now available in Knowledge Bases for Amazon Bedrock | AWS News Blog

More vector database choices for Amazon Bedrock Knowledge Base. This might make sense if you are already using MongoDB Atlas.

MongoDB Atlas vector store in Knowledge Bases for Amazon Bedrock is available in the US East (N. Virginia) and US West (Oregon) Regions. Be sure to check the full Region list for future updates.

Categories
ai cloud

Amazon Bedrock Knowledge Base – Part 3 (SDK)

Since the last writeup, AWS has added support for the Anthropic Claude 3 models to Amazon Bedrock Knowledge Base (ABKB). It has also added the ability to attach your own metadata to your source files in order to filter results when querying. For example, you may want to add metadata to a certain set of files to indicate that they are from year 2023. Then, during your query, you can include a filter to indicate that you only want to use data from year 2023. This gives developers another set of tools to create more relevant and targeted queries. Note that filtering is only supported for the FAISS vector engine.
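
To make this concrete, the metadata lives in a sidecar JSON file uploaded alongside each source document – a sketch based on the documented format, with a made-up file name. For a source file named leave-policy-2023.pdf, you would upload leave-policy-2023.pdf.metadata.json containing:

{
  "metadataAttributes": {
    "year": "2023"
  }
}

At query time, the filter then goes under vectorSearchConfiguration, for example:

"filter": {
  "equals": { "key": "year", "value": "2023" }
}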

If you’re looking to integrate ABKB into your code, there are two primary methods: using one of the AWS SDKs, or interacting with the HTTP API directly. In this article, we will be using Boto3, the AWS SDK for Python. Here is a simple example of a retrieve-and-generate query using Boto3. This example uses the new Claude 3 Sonnet model.

import boto3
import json

AWS_ACCESS_KEY="_your_access_key_"
AWS_SECRET_KEY="_your_secret_key_"
REGION_NAME="_your_region_"

# create a client for the Bedrock agent runtime, which serves knowledge base queries
client = boto3.client('bedrock-agent-runtime',
                      aws_access_key_id=AWS_ACCESS_KEY,
                      aws_secret_access_key=AWS_SECRET_KEY,
                      region_name=REGION_NAME
)

# perform a retrieve-and-generate query against the knowledge base
response = client.retrieve_and_generate(
    input={
        'text': 'how to apply for leave'
    },
    retrieveAndGenerateConfiguration={
        'knowledgeBaseConfiguration': {
            'knowledgeBaseId': 'LEBQPJQ9BY',
            'modelArn': 'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0',
            'retrievalConfiguration': {
                'vectorSearchConfiguration': {
                    'overrideSearchType': 'HYBRID'
                }
            }
        },
        'type': 'KNOWLEDGE_BASE'
    }
)

print(json.dumps(response, indent=2))

Running the code produces the following output in JSON:

{
  "ResponseMetadata": {
		...trimmed...
  },
  "citations": [
    {
      "generatedResponsePart": {
        "textResponsePart": {
          "span": {
            "end": 705,
            "start": 364
          },
          "text": "...trimmed..."
        }
      },
      "retrievedReferences": [
        {
          "content": {
            "text": "...trimmed..."
          },
          "location": {
            "s3Location": {
              "uri": "s3://...trimmed..."
            },
            "type": "S3"
          }
        }
      ]
    }
  ],
  "output": {
    "text": "To apply for leave as an employee on the Workday mobile app:\n\n1. Navigate to your Workday Mobile Homepage and select 'View Applications' under 'Frequently Used'\n2. Select 'Time Off'\n3. Select the date(s) you want to apply for leave\n4. Select 'Next' and choose the leave type\n5. Select any required reasons or upload attachments if applicable\n6. Submit the request To apply for leave as an employee on the Workday desktop:\n\n1. Go to the Workday Homepage and select the 'Absence' worklet\n2. Under 'Request', select 'Request Absence'\n3. Select the date(s) for the leave and click 'Request Absence'\n4. Choose the leave type\n5. Select 'Next' and provide any required reasons or attachments\n6. Submit the request"
  },
  "sessionId": "c8332417-df3c-41e5-8516-ad38cc09de15"
}
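
The generated answer sits in output.text and the supporting chunks in citations, so extracting what you need is straightforward. A minimal sketch based on the response structure above:

print(response['output']['text'])

# list the source documents backing the answer
for citation in response['citations']:
    for ref in citation['retrievedReferences']:
        print(ref['location']['s3Location']['uri'])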

For this simple task there is not much difference in output between the various Claude models. I expect the differences to be more pronounced for complex tasks or those involving a much larger context window.

With this, we conclude the three-part series on Amazon Bedrock Knowledge Base. I have covered everything from creating the knowledge base and testing it in the playground, to executing queries via the CLI and the SDK. Hopefully this gives a good overview of the processes involved and the capabilities of this new service.

Categories
ai cloud

Amazon Bedrock Knowledge Base: Part 2 (CLI)

In Part 1, I showed how you can set up an Amazon Bedrock Knowledge Base (ABKB for short) using the AWS console. I also showed how you can perform queries against the knowledge base via the playground in the AWS console. In this article, I will show how you can do the same thing via the AWS CLI.

First, make sure you are using the latest version of the CLI; otherwise, some commands might not be available. To see if your CLI supports the commands, run:

aws bedrock-agent-runtime help

It should return something like this:

BEDROCK-AGENT-RUNTIME()                                BEDROCK-AGENT-RUNTIME()



NAME
       bedrock-agent-runtime -

DESCRIPTION
       Amazon Bedrock Agent

AVAILABLE COMMANDS
       o help

       o retrieve

       o retrieve-and-generate

Next, make sure you have the access and secret keys configured in the AWS CLI. You can do it via the usual aws configure, but I usually use a profile since I have many AWS accounts/IAM users, eg. aws configure --profile demo. For convenience, I will set up an alias that uses the new profile: alias aws='aws --profile=demo --region=us-east-1'

We can now test the retrieve command in the CLI. To run the command, you will need the knowledge base ID. Strangely, there is no way to get this via the CLI 🤷. For now, just copy the value from the AWS console. Once that is done, you are ready to run the CLI command. Omitting optional/default parameters, this is the simplest version of the command:

aws bedrock-agent-runtime retrieve \
--knowledge-base-id LEBQPJQ9BY \
--retrieval-query '{ "text": "how to apply for leave" }'

Retrieve performs a vector search using the query text and returns a list of matches, each with a score. As mentioned in Part 1, if you are implementing a custom RAG workflow, you can use the output of retrieve as the context for further prompting. Scores range from 0 to 1, with 1 being the most relevant.

{
    "retrievalResults": [
        {
            "content": {
                "text": "<trimmed>"
            },
            "location": {
                "type": "S3",
                "s3Location": {
                    "uri": "s3://<trimmed>"
                }
            },
            "score": 0.75545114
        },
        {
            "content": {
                "text": "<trimmed>"
            },
            "location": {
                "type": "S3",
                "s3Location": {
                    "uri": "s3://<trimmed>"
                }
            },
            "score": 0.7345349
        },

(Note: output trimmed for brevity and sanitization)

Next, we will test the retrieve-and-generate command, which implements the fully managed RAG workflow.

Unlike some other CLI commands, which use a model ID, you will need the model ARN for querying. There is currently no way to get the model ARN from the AWS console, so you will need to get it via another CLI command:

aws bedrock list-foundation-models

Not all models can be used in ABKB – at least for now. Stick to Claude Instant V1, Claude V2 or V2.1, and only use ON_DEMAND models. I made the mistake of choosing a PROVISIONED model, and all I got was a cryptic error message. Yikes.

An error occurred (ValidationException) when calling the RetrieveAndGenerate operation: 1 validation error detected: Value 'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-instant-v1:2:100k' at 'retrieveAndGenerateConfiguration.knowledgeBaseConfiguration.modelArn' failed to satisfy constraint: Member must satisfy regular expression pattern: (arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:(([0-9]{12}:custom-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}/[a-z0-9]{12})|(:foundation-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|([0-9]{12}:provisioned-model/[a-z0-9]{12})))|([a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|(([0-9a-zA-Z][_-]?)+)
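
To avoid this, you can filter the list down to on-demand Anthropic models before picking an ARN. A sketch using the CLI's built-in JMESPath --query option (field names taken from the list-foundation-models output):

aws bedrock list-foundation-models \
--query 'modelSummaries[?providerName==`Anthropic` && contains(inferenceTypesSupported, `ON_DEMAND`)].modelArn'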

With the right model ARN in hand, you are ready to execute the retrieve-and-generate command. Here is an example of the command you can execute:

aws bedrock-agent-runtime retrieve-and-generate \
--input '{ "text": "how to apply for leave" }' \
--retrieve-and-generate-configuration '
{
  "knowledgeBaseConfiguration": {
    "knowledgeBaseId": "LEBQPJQ9BY",
    "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-instant-v1",
    "retrievalConfiguration": {
      "vectorSearchConfiguration": {
        "overrideSearchType": "HYBRID"
      }
    }
  },
  "type": "KNOWLEDGE_BASE"
}
'

If all goes well, you will get an output like this:

{
    "sessionId": "0ff48086-f26f-4ebd-bb68-7c7bcd1e414a",
    "output": {
        "text": "To apply for leave, navigate to the Workday homepage and select the Absence worklet. Then select \"Request Absence\" and choose the date range and type of leave you want to apply for. You may need to provide additional details or attachments depending on the leave type. Finally, select \"Submit\" to complete the request."
    },
    "citations": [
        {
            "generatedResponsePart": {
                "textResponsePart": {
                    "text": "<trimmed>",
                    "span": {
                        "start": 0,
                        "end": 317
                    }
                }
            },
            "retrievedReferences": [
                {
                    "content": {
                        "text": "<trimmed>"
                    },
                    "location": {
                        "type": "S3",
                        "s3Location": {
                            "uri": "s3://<trimmed>"
                        }
                    }
                },

In an earlier attempt, I included numberOfResults in vectorSearchConfiguration and got an error message. Note that numberOfResults is currently unsupported.

Closing Thoughts

While writing this article, I noted some general observations in terms of CLI/console usage:

  • Use of model ID vs model ARN: some CLI commands use the model ID while others use the model ARN
  • Some information can only be found in the AWS console (eg. knowledge base ID), while other information is only available via the AWS CLI (eg. model ARN)
  • Inconsistent naming in the CLI (eg. --retrieval-query vs --input) and in error messages (the error refers to numResults while the actual field is numberOfResults)

Since ABKB is so new, there are bound to be some rough edges here and there. None of these are showstoppers, and I expect them to clear up as the service matures. For now, do be aware that the service is undergoing rapid development and updates.

Categories
ai cloud

Amazon Bedrock Knowledge Base: a first look

Amazon Bedrock is a fully managed service designed to simplify the development of generative AI applications – as opposed to Amazon SageMaker, which provides services for machine learning applications. It offers access to a growing collection of foundation models from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon itself.

One of the latest offerings under Amazon Bedrock is Amazon Bedrock Knowledge Base – which I shall refer to as ABKB. Essentially, ABKB is a simple way to do retrieval augmented generation (RAG). RAG is a technique to overcome some of the problems with foundation models (FMs), such as not having up-to-date information and lacking knowledge of an organization’s own data. Instead of retraining or fine-tuning an FM with your own data, RAG allows an existing FM to reference that data when responding to a query, improving accuracy and minimizing hallucinations. In this article, I will go through the process of setting up ABKB and see how it can be used in a sample application. But first, let’s look at the two ways you can use ABKB in a RAG application:

A custom RAG workflow is useful for cases where you want more control over the prompt augmentation process, or where you want to use FMs that are not available in AWS. Here, you only use AWS to generate embeddings – a technique to convert words into numerical form – and to retrieve documents similar to the user prompt.
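
To make the distinction concrete, here is a minimal sketch of the custom workflow in Python, assuming a knowledge base already exists (the knowledge base ID is hypothetical, and the final generation step is left to whichever FM you choose):

import boto3

# the Bedrock agent runtime serves knowledge base queries
client = boto3.client('bedrock-agent-runtime', region_name='us-east-1')

# Step 1: ABKB embeds the query and performs the vector search
response = client.retrieve(
    knowledgeBaseId='XXXXXXXXXX',  # copy this from the AWS console
    retrievalQuery={'text': 'how to apply for leave'}
)

# Step 2: augment the prompt yourself with the retrieved chunks
context = '\n\n'.join(r['content']['text'] for r in response['retrievalResults'])
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: how to apply for leave"

# Step 3: send the augmented prompt to any FM you like, inside or outside AWS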

In a fully managed RAG workflow, we use AWS for all stages of the pipeline, and this is what we will be doing.

Things to take note of

As with new services, Amazon Bedrock is currently available only in limited regions. We will be using the US East (N. Virginia) region.

Note that you will need to log in as an IAM user, not the root user, to use ABKB. Suffice it to say, the IAM user will need sufficient permissions.

Request model access

Before you can use any Bedrock services, you will need to request access to the models that you want to use. This is done under the Model access option on the left panel.

There is no cost to request model access, so you might as well request everything. The main reason for this step is to make sure you agree to the EULA for each model, which differs between providers. Access to most models is granted automatically upon request, except for the Anthropic models, for which you will need to provide a use case. You will need to do this because ABKB only supports Anthropic models for its retrieve-and-generate API at this point.

Create data source

ABKB takes data from an S3 bucket as its data source, so you will need to use or create an S3 bucket. Use the same region as the knowledge base if you are creating a new bucket.

There are some limitations on the size and type of documents supported. Mainly, documents should not exceed 50MB in size, and only the text in supported document types (txt, md, html, doc, csv, xls, pdf) will be processed.

Create knowledge base

There are four steps to creating a knowledge base. Step 1 is straightforward and just involves filling in the name, description and IAM permissions – which you can leave as the defaults.

In Step 2, you will need to specify the data source. In theory, you should be able to click on [Browse S3] and choose your bucket, but it was not working for me, so enter the S3 URI manually if you need to. For the chunking strategy, you can leave it as the default (300 tokens) or customize it according to your needs.

In Step 3, choose your embeddings model and vector database. You can choose between Amazon Titan and Cohere’s model for embeddings; some articles say Cohere’s models are superior, so you can pick either and evaluate the performance. For the vector database, you can select Amazon OpenSearch Serverless, Aurora, Pinecone or Redis Enterprise Cloud. For development and testing, OpenSearch Serverless is the cheapest option.

Step 4 is basically just a confirmation step. Click [Create knowledge base] to confirm. Note that it will take some time to provision the necessary resources after you click, and while that is happening, do not close your browser tab. This is quite unusual, as provisioning normally takes place in the background with no need to keep the frontend open, but that is not the case here.

Assuming all goes well, you will see a message saying that the knowledge base has been created successfully. You might have to wait a few more minutes for the vector database to fully index the contents of the data source.

Test knowledge base

Now comes the fun part: you can select your knowledge base and test it in the playground. You can configure the search strategy and model under Configuration. Depending on your use case, you might want to change the search strategy.

For model selection, Claude Instant provides the fastest response, but it does not perform as well on complex queries. I find almost no difference between Claude 2 and 2.1, but that is probably because my queries do not require a larger context window.

Sample responses

To test ABKB, I uploaded a 238-page employee user guide and used it to ask questions. The first one is a simple question.

Note that the response includes references to source chunks, which are the relevant pieces of text extracted from the data source. You can also expand a source chunk to see the actual text.

The second example is one where I asked follow-up questions.

I also tried to ask it something that is not in the document, to which it correctly responded that there is no such information.

Conclusion

Amazon Bedrock Knowledge Base provides an opinionated way to do RAG in a straightforward manner. The knowledge base you create is meant to be integrated into applications via the AWS SDK. As it is fairly new at this stage, some rough edges are to be expected. Some of the issues encountered so far include:

  • Model request UI not straightforward
  • Browse S3 not listing buckets unless they are already a data source
  • Provisioning requires staying on the page
  • Only Anthropic models available for response generation
  • New models like Anthropic Claude 3 not available (update: the Claude 3 models are now available)
  • Knowledge base creation sometimes fails

Despite the teething issues, ABKB looks like a useful service for organizations to create RAG applications easily within the AWS ecosystem, and I am excited to see more features added in the upcoming weeks/months.

Categories
ai

Year of the Dragon

In the spirit of the Year of the Dragon, I made 10 images using SDXL featuring dragons in different styles. Feel free to use them for any purpose.

Which is your favourite?

Categories
ai

A Man Sued Avianca Airline. His Lawyer Used ChatGPT. – The New York Times

This is what happens when somebody uses ChatGPT as if it were a search engine. People are so used to precise and deterministic output from programs that it’s hard for them to imagine one that not only fabricates information, but does so convincingly.

The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable.”

Source: A Man Sued Avianca Airline. His Lawyer Used ChatGPT. – The New York Times

Categories
ai

ChatGPT Prompt Engineering for Developers – DeepLearning.AI

For a limited time only, this free course by Isa Fulford and Andrew Ng (Coursera, DeepLearning.AI), called ChatGPT Prompt Engineering for Developers, is available for anyone looking to expand their development skills. The course is an excellent opportunity for developers who want to learn how to use a large language model (LLM) to create powerful applications in a cost-effective and time-efficient way.

Throughout the course, Isa Fulford and Andrew Ng explain the workings of LLMs and provide best practices for prompt engineering. You’ll be able to learn how to use the OpenAI API to build capabilities that can automatically summarize user reviews, classify sentiment, extract topics, translate text, and even write emails. Additionally, you’ll learn how to build a custom chatbot and use two key principles for writing effective prompts.

What I appreciate about this course is the hands-on experience provided in the Jupyter notebook environment. You’ll be able to play with numerous examples and systematically engineer good prompts. This makes it easy to put the concepts learned in the course into practice in your own projects.

So, if you’re looking for an opportunity to upskill and learn how to build innovative applications that were once impossible or highly technical, I highly recommend taking this course. Don’t miss out on the chance to learn from experts and expand your skill set for free.

ChatGPT Prompt Engineering for Developers is beginner-friendly. Only a basic understanding of Python is needed. But it is also suitable for advanced machine learning engineers wanting to approach the cutting-edge of prompt engineering and use LLMs.

Source: ChatGPT Prompt Engineering for Developers – DeepLearning.AI

Categories
ai

ChatGPT limitations

People are often amused or surprised when ChatGPT fails to give a correct response to seemingly simple questions (eg. multiplying two numbers), yet is able to answer very complex ones.

The way to think about ChatGPT and other LLM tools is that they are simply assistants, not oracles.

AI tools like ChatGPT have a mental model of the world and try to imagine the best answer to any given prompt. But they may not get it right all the time, and when they don’t have an answer they will try their best anyway (ie. fabricate one).

An assistant makes mistakes; that’s why you should expect ChatGPT’s output to contain mistakes.

That said, ChatGPT is really good in areas that don’t require precision (eg. creative writing).

Update (2023-02-01): ChatGPT has released a newer version that is supposed to have improved factuality and mathematical capabilities. Well, it didn’t work for me.

(The answer, for the record, is 10365.)