Transformer and BERT

1. What are the three different embeddings that are generated from an input sentence in a Transformer model?

Token, segment, and position embeddings
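
To make this concrete, here is a minimal sketch (assuming PyTorch; the vocabulary size, sequence length, and hidden size are illustrative) of how the three embeddings are looked up and summed into a single input representation:

```python
# Minimal sketch: token, segment, and position embeddings are looked up
# separately and added element-wise to form the model's input representation.
# Sizes and token ids are illustrative, not taken from any specific checkpoint.
import torch
import torch.nn as nn

vocab_size, max_len, hidden = 30522, 128, 768

token_emb = nn.Embedding(vocab_size, hidden)   # one vector per token id
segment_emb = nn.Embedding(2, hidden)          # sentence A vs. sentence B
position_emb = nn.Embedding(max_len, hidden)   # one vector per position

token_ids = torch.tensor([[101, 7592, 2088, 102]])          # e.g. [CLS] hello world [SEP]
segment_ids = torch.zeros_like(token_ids)                    # all tokens from sentence A
position_ids = torch.arange(token_ids.size(1)).unsqueeze(0)  # 0, 1, 2, 3

x = token_emb(token_ids) + segment_emb(segment_ids) + position_emb(position_ids)
print(x.shape)  # torch.Size([1, 4, 768])
```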

2. What kind of transformer model is BERT?

Encoder-only model

3. What is the name of the language modeling technique that is used in Bidirectional Encoder Representations from Transformers (BERT)?

Masked language modeling (MLM): some input tokens are replaced with a [MASK] token and the model is trained to predict them.
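
As a quick illustration of masked language modeling, here is a hedged sketch using the Hugging Face transformers fill-mask pipeline; the checkpoint name and example sentence are illustrative, and the transformers package must be installed separately:

```python
# Sketch of masked language modeling with a pretrained BERT checkpoint.
# The model predicts the token hidden behind [MASK].
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```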

4. What does fine-tuning a BERT model mean?

Training the model and updating the pre-trained weights on a specific task by using labeled data
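
Below is a hedged sketch of a single fine-tuning step with the Hugging Face transformers library and PyTorch; the checkpoint, the two-label task, the example texts, and the learning rate are illustrative assumptions rather than a prescribed recipe:

```python
# Sketch of one fine-tuning step for BERT on a labeled classification task.
# Checkpoint name, labels, texts, and hyperparameters are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

texts = ["great movie", "terrible movie"]
labels = torch.tensor([1, 0])  # task-specific labeled data

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Forward pass computes the classification loss against the labels;
# backpropagation then updates the pre-trained weights for this task.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```

In practice this step would be repeated over batches of the labeled dataset for a few epochs.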

5. What are the encoder and decoder components of a transformer model?

The encoder ingests an input sequence and produces a sequence of hidden states. The decoder takes in the hidden states from the encoder and produces an output sequence.

6. What are the two sublayers of each encoder in a Transformer model?

Self-attention and feedforward
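
A minimal sketch of one encoder layer, assuming PyTorch, with the two sublayers wrapped in residual connections and layer normalization as in the original Transformer; the dimensions are illustrative:

```python
# Minimal sketch of a single Transformer encoder layer with its two sublayers:
# multi-head self-attention and a position-wise feedforward network,
# each followed by a residual connection and layer normalization.
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Sublayer 1: self-attention (queries, keys, values all come from x).
        attn_out, _ = self.self_attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Sublayer 2: position-wise feedforward network.
        x = self.norm2(x + self.ff(x))
        return x

x = torch.randn(1, 10, 512)       # (batch, sequence length, model dimension)
print(EncoderLayer()(x).shape)    # torch.Size([1, 10, 512])
```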

7. What is the attention mechanism?

A way of determining how important each word in the input sentence is when generating each word of the output (for example, a translation)

8. What is a transformer model?

A deep learning model that uses self-attention to learn relationships between different parts of a sequence.
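
As a sketch of the self-attention idea, the following NumPy snippet computes scaled dot-product attention over a toy sequence; using the same matrix for queries, keys, and values (instead of learned projections) is a simplification for illustration only:

```python
# Sketch of scaled dot-product self-attention: every position attends to
# every other position, so the model can learn relationships between
# different parts of the sequence.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_k = 4, 8                    # illustrative sizes
x = np.random.randn(seq_len, d_k)      # token representations

# In a real model Q, K, V come from learned projections of x.
Q, K, V = x, x, x

scores = Q @ K.T / np.sqrt(d_k)        # pairwise relevance between positions
weights = softmax(scores, axis=-1)     # attention weights, each row sums to 1
output = weights @ V                   # each position becomes a mix of all positions
print(weights.shape, output.shape)     # (4, 4) (4, 8)
```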

Attention Mechanism

What is the name of the machine learning technique that allows a neural network to focus on specific parts of an input sequence?

Attention mechanism

How does an attention model differ from a traditional model?

Attention models pass much more information to the decoder: instead of only the encoder's final hidden state, the decoder receives all of the encoder's hidden states, weighted by relevance.

What is the name of the machine learning architecture that can be used to translate text from one language to another?

Encoder-decoder

What is the advantage of using the attention mechanism over a traditional recurrent neural network (RNN) encoder-decoder?

The attention mechanism lets the decoder focus on specific parts of the input sequence, which can improve the accuracy of the translation.

What is the advantage of using the attention mechanism over a traditional sequence-to-sequence model?

The attention mechanism lets the model focus on specific parts of the input sequence.

What is the purpose of the attention weights?

To assign weights to different parts of the input sequence, with the most important parts receiving the highest weights.

What are the two main steps of the attention mechanism?

Calculating the attention weights and generating the context vector
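
A small NumPy sketch of those two steps for one decoder step attending over the encoder's hidden states; dot-product scoring is just one common choice, and the sizes are illustrative:

```python
# Sketch of the two attention steps for a single decoder step:
# (1) calculate attention weights over the encoder hidden states,
# (2) generate the context vector as their weighted sum.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

hidden = 6
encoder_states = np.random.randn(5, hidden)   # one hidden state per input word
decoder_state = np.random.randn(hidden)       # current decoder state

# Step 1: attention weights (the most relevant inputs get the largest weights).
scores = encoder_states @ decoder_state
weights = softmax(scores)

# Step 2: context vector = weighted sum of the encoder hidden states.
context = weights @ encoder_states
print(weights.round(2), context.shape)
```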

Encoder-Decoder Architecture

What is the purpose of the decoder in an encoder-decoder architecture?

- To generate the output sequence from the vector representation

- To predict the next word in the output sequence

What is the purpose of the encoder in an encoder-decoder architecture?

- To convert the input sequence into a vector representation
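
To show the split concretely, here is a compact sketch of an encoder-decoder built from GRUs in PyTorch; the vocabulary size, dimensions, and start-token id are illustrative assumptions:

```python
# Compact sketch of the encoder/decoder split using GRUs.
# The encoder turns the input sequence into a vector representation;
# the decoder generates the output sequence from that representation.
import torch
import torch.nn as nn

vocab, hidden = 100, 32                      # illustrative sizes

embed = nn.Embedding(vocab, hidden)
encoder = nn.GRU(hidden, hidden, batch_first=True)
decoder = nn.GRU(hidden, hidden, batch_first=True)
out_proj = nn.Linear(hidden, vocab)          # scores over the output vocabulary

src = torch.randint(0, vocab, (1, 7))        # input sequence of token ids
_, state = encoder(embed(src))               # encoder: sequence -> vector representation

token = torch.tensor([[1]])                  # assume id 1 is a <start> token
for _ in range(5):                           # decoder: predict the next word step by step
    dec_out, state = decoder(embed(token), state)
    token = out_proj(dec_out).argmax(-1)     # greedy choice of the next word
    print(token.item())
```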


What are two ways to generate text from a trained encoder-decoder model at serving time?


- Greedy search and beam search

What is the difference between greedy search and beam search?

- Greedy search always selects the single most probable word at each step, whereas beam search keeps several candidate sequences at each step and selects the sequence with the highest combined probability.
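
The contrast can be shown with a toy example; the next_probs function below is a stand-in for a trained decoder, and its vocabulary and probabilities are invented purely for illustration:

```python
# Toy sketch contrasting greedy search and beam search over a made-up
# next-token distribution.
import math

def next_probs(prefix):
    # Hypothetical conditional probabilities P(next word | prefix); invented values.
    table = {
        (): {"the": 0.55, "a": 0.45},
        ("the",): {"dog": 0.6, "cat": 0.4},
        ("a",): {"cat": 0.9, "dog": 0.1},
    }
    return table.get(tuple(prefix), {"<eos>": 1.0})

def greedy(steps=3):
    seq = []
    for _ in range(steps):
        probs = next_probs(seq)
        seq.append(max(probs, key=probs.get))   # commit to the single best next word
    return seq

def beam_search(beam_width=2, steps=3):
    beams = [([], 0.0)]                          # (sequence, log-probability)
    for _ in range(steps):
        candidates = []
        for seq, logp in beams:
            for word, p in next_probs(seq).items():
                candidates.append((seq + [word], logp + math.log(p)))
        # Keep several candidate sequences, ranked by combined probability.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

print("greedy:", greedy())       # ['the', 'dog', '<eos>']  with probability 0.33
print("beam  :", beam_search())  # ['a', 'cat', '<eos>']    with probability 0.405
```

Here greedy search commits to "the" and ends up with a less probable sentence, while beam search keeps both prefixes alive and finds the sequence with the higher combined probability.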

What is the name of the machine learning architecture that takes a sequence of words as input and outputs a sequence of words?

- Encoder-decoder

Responsible AI

Which of the following is one of Google’s 7 AI principles?

AI should uphold high standards of scientific excellence.

Why is responsible AI practice important to an organization?

Responsible AI practice can help build trust with customers and stakeholders.

Organizations are developing their own AI principles that reflect their mission and values. What are the common themes among these principles?

A consistent set of ideas about transparency, fairness, accountability, and privacy.

Which of these is correct with regard to applying responsible AI practices?

Decisions made at all stages of a project have an impact on responsible AI.

Generative AI and LLMs


What is Generative AI?

Generative AI is a type of artificial intelligence (AI) that can create new content, such as text, images, audio, and video. It does this by learning from existing data and then using that knowledge to generate new and unique outputs.

Hallucinations are words or phrases generated by the model that are often nonsensical or grammatically incorrect. What are some factors that can cause hallucinations? Select three options.


The model is not given enough context.

The model is trained on noisy or dirty data.

The model is not trained on enough data.

What are foundation models in Generative AI?

A foundation model is a large AI model pretrained on a vast quantity of data that was “designed to be adapted” (or fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition.

What is a prompt?

A prompt is a short piece of text that is given to the large language model as input, and it can be used to control the output of the model in many ways.
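
As a small illustration of how the prompt controls the output, here is a hedged sketch using the Hugging Face text-generation pipeline; the GPT-2 checkpoint and prompts are illustrative, and a small model like GPT-2 will not follow instructions nearly as well as a large LLM, but the mechanics are the same:

```python
# Sketch of prompting a language model: changing the prompt changes the output.
# Checkpoint and prompts are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

for prompt in ["Write a haiku about the ocean:", "Translate to French: Hello, world."]:
    result = generator(prompt, max_new_tokens=30, do_sample=False)
    print(result[0]["generated_text"])
```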

What is an example of both a generative AI model and a discriminative AI model?

A generative AI model could be trained on a dataset of images of cats and then used to generate new images of cats. A discriminative AI model could be trained on a dataset of images of cats and dogs and then used to classify new images as either cats or dogs.

What are large language models (LLMs)?

An LLM is a type of artificial intelligence (AI) that can generate human-quality text. LLMs are trained on massive datasets of text and code, and they can be used for many tasks, such as writing, translating, and coding.

What are some of the challenges of using LLMs? Select three options.

They can be biased.

They can be used to generate harmful content.

They can be expensive to train.

What are some of the applications of LLMs?

LLMs can be used for many tasks, including:

  1. Writing
  2. Translating
  3. Coding
  4. Answering questions
  5. Summarizing text
  6. Generating creative content

What are some of the benefits of using large language models (LLMs)?

LLMs have many benefits, including:

  1. They can generate human-quality text.
  2. They can be used for a variety of tasks.
  3. They can be trained on massive datasets of text and code.
  4. They are constantly being improved.

2023 Data Science Survey: Please Participate & Help Spread the Word

Rexer Analytics has been conducting the Data Science Survey since 2007. Each survey explores the analytic behaviors, views, and preferences of data scientists and analytic professionals. This year we are excited to work with Eric Siegel and his Machine Learning Week organization to design, promote, and analyze this Data Science Survey.

Summary reports from previous surveys are available FREE to download from the Rexer Analytics website.

Karl Rexer and Eric Siegel will present preliminary highlights of the 2023 survey results at the Machine Learning Week conference in Las Vegas in June 2023, and the presentation will be posted online shortly afterward. Come join us! A full summary report will be available for download from the Rexer Analytics website later in 2023.

Movie Review - The Batman is the worst Batman movie sans Ben Affleck

We have seen Batman movies before: the gold standard of the Nolan trilogy, and the original Keaton-Kilmer-Clooney series (Clooney was terrible too).

Ben Affleck was a bad Batman, but better than Daredevil.

This one is an attempt to make Batman the Greatest Detective, with some angst and brooding. It fails.

All you have is a wasted effort and a long movie.

I could be a better Batman. It's like that.