Which of the following statements is true of TensorFlow?
A. TensorFlow is a scalable and single-platform programming interface for implementing and running machine learning algorithms, including convenience wrappers for deep learning.
B. Although able to run on other processing platforms, TensorFlow 2.0 is not yet able to run on Graphics Processing Units (GPUs).
C. Although able to run on other processing platforms, TensorFlow 2.0 is not yet able to run on Tensor Processing Units (TPUs).
D. TensorFlow is a scalable and multi-platform programming interface for implementing and running machine learning algorithms, including convenience wrappers for deep learning.
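For reference, a minimal sketch (assuming TensorFlow 2.x is installed) showing that the same program can discover and use whatever hardware platform is available:

    import tensorflow as tf

    # TensorFlow 2.x runs the same code on CPUs, GPUs, and (on supported VMs) TPUs.
    print(tf.config.list_physical_devices("CPU"))
    print(tf.config.list_physical_devices("GPU"))  # empty list if no GPU is attached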
How does TensorFlow represent numeric computations?
A. Using a Directed Acyclic Graph (or DAG)
B. Flow chart
C. Both Using a Directed Acyclic Graph (or DAG) and Flow chart
D. None of the options are correct
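A small sketch (TensorFlow 2.x assumed) of how tf.function traces Python code into a graph, i.e. a Directed Acyclic Graph of operations:

    import tensorflow as tf

    @tf.function  # traces the Python function into a TensorFlow graph (a DAG of ops)
    def f(x):
        return x * x + 1.0

    # Each node of the traced graph is an operation; the edges carry tensors.
    concrete = f.get_concrete_function(tf.constant(2.0))
    print([op.name for op in concrete.graph.get_operations()])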
Which are useful components when building custom Neural Network models?
A. tf.losses
B. tf.metrics
C. tf.optimizers
D. All of the options are correct.
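A minimal custom-training sketch combining all three components (in TF 2.x, tf.losses, tf.metrics, and tf.optimizers alias tf.keras.losses, tf.keras.metrics, and tf.keras.optimizers; the toy data is synthetic):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    loss_fn = tf.keras.losses.MeanSquaredError()
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
    mean_loss = tf.keras.metrics.Mean()

    x = tf.random.normal((8, 3))
    y = tf.random.normal((8, 1))
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    mean_loss.update_state(loss)
    print(mean_loss.result().numpy())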
Which API is used to build performant, complex input pipelines from simple, reusable pieces that feed your model's training or evaluation loops?
A. tf.Tensor
B. tf.data.Dataset
C. tf.device
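A short sketch of a tf.data.Dataset pipeline built from simple, reusable pieces (the arrays are synthetic):

    import numpy as np
    import tensorflow as tf

    features = np.random.rand(100, 3).astype("float32")
    labels = np.random.rand(100, 1).astype("float32")

    ds = (tf.data.Dataset.from_tensor_slices((features, labels))
          .shuffle(100)                  # randomize example order
          .batch(32)                     # group examples into mini-batches
          .prefetch(tf.data.AUTOTUNE))   # overlap input work with training
    for batch_x, batch_y in ds.take(1):
        print(batch_x.shape, batch_y.shape)  # (32, 3) (32, 1)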
What operations can be performed on tensors?
A. They can be reshaped
B. They can be sliced
C. They can be both reshaped and sliced
D. None of the options are correct.
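Both operations in one short sketch:

    import tensorflow as tf

    t = tf.constant([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
    print(tf.reshape(t, (3, 2)))             # reshaped to shape (3, 2)
    print(t[:, 1:])                          # sliced: every row, columns 1 onward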
Which of the following is true when we compute a loss gradient?
A. TensorFlow records all operations executed inside the context of a tf.GradientTape onto a tape.
B. It uses the tape and the gradients associated with each recorded operation to compute the gradients.
C. The computed gradient of a recorded computation will be used in reverse mode differentiation.
D. All options are correct.
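A minimal tf.GradientTape sketch illustrating all three points:

    import tensorflow as tf

    w = tf.Variable(3.0)
    with tf.GradientTape() as tape:   # operations inside are recorded onto the tape
        loss = w * w
    grad = tape.gradient(loss, w)     # reverse-mode differentiation over the tape
    print(grad.numpy())               # 6.0, since d(w^2)/dw = 2w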
What are the distinct ways to create a dataset?
A. A data source constructs a Dataset from data stored in memory or in one or more files.
B. A data transformation constructs a dataset from one or more tf.data.Dataset objects.
C. A data source constructs a Dataset from data stored in memory or in one or more files, and a data transformation constructs a dataset from one or more tf.data.Dataset objects.
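A sketch contrasting the two: a source creates a Dataset from raw data, while a transformation derives a new Dataset from an existing one:

    import tensorflow as tf

    # Source: builds a Dataset from data in memory (file-based sources include
    # tf.data.TextLineDataset and tf.data.TFRecordDataset).
    source = tf.data.Dataset.range(10)

    # Transformation: builds a new Dataset from one or more existing Datasets.
    transformed = source.map(lambda x: x * 2).filter(lambda x: x < 10)
    print(list(transformed.as_numpy_iterator()))  # [0, 2, 4, 6, 8]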
Which of the following is true about embedding?
A. An embedding is a weighted sum of the feature crossed values.
B. Embedding is a handy adapter that allows a network to incorporate sparse or categorical data.
C. The number of embeddings is a hyperparameter of your machine learning model.
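A short Embedding sketch; the vocabulary size and the 8-dimensional output are illustrative hyperparameters:

    import tensorflow as tf

    # Maps sparse/categorical ids into dense vectors the network can learn from.
    embed = tf.keras.layers.Embedding(input_dim=1000, output_dim=8)
    ids = tf.constant([[3, 17, 42]])
    print(embed(ids).shape)  # (1, 3, 8): one 8-dim vector per input id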
What is the use of tf.keras.layers.TextVectorization?
A. It performs feature-wise normalization of input features.
B. It turns continuous numerical features into bucket data with discrete ranges.
C. It turns raw strings into an encoded representation that can be read by an Embedding layer or Dense layer.
D. It turns string categorical values into encoded representations that can be read by an Embedding layer or Dense layer.
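A minimal TextVectorization sketch (TF 2.x; the toy strings are illustrative):

    import tensorflow as tf

    texts = tf.constant(["the cat sat", "the dog ran"])
    vectorizer = tf.keras.layers.TextVectorization(output_mode="int")
    vectorizer.adapt(texts)   # learn the vocabulary from the raw strings
    print(vectorizer(texts))  # integer token ids, ready for an Embedding or Dense layer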
Which of the following is not a part of Categorical features preprocessing?
A. tf.keras.layers.CategoryEncoding
B. tf.keras.layers.Hashing
C. tf.keras.layers.IntegerLookup
D. tf.keras.layers.Discretization
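Short sketches of the categorical preprocessing layers named above (Discretization, by contrast, buckets numerical features):

    import tensorflow as tf

    lookup = tf.keras.layers.IntegerLookup(vocabulary=[10, 20, 30])
    print(lookup(tf.constant([10, 30, 99])))  # 99 falls into the out-of-vocabulary index

    hashing = tf.keras.layers.Hashing(num_bins=4)
    print(hashing(tf.constant([["cat"], ["dog"]])))  # stateless hashing into 4 bins

    onehot = tf.keras.layers.CategoryEncoding(num_tokens=4, output_mode="one_hot")
    print(onehot(tf.constant([0, 2])))  # one-hot vectors of length 4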
Which of the following layers is non-trainable?
A. Discretization
B. Hashing
C. Normalization
D. StringLookup
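A quick way to check trainability: these preprocessing layers get their state from adapt() (or a supplied vocabulary), not from gradient descent:

    import tensorflow as tf

    norm = tf.keras.layers.Normalization()
    norm.adapt(tf.constant([[1.0], [2.0], [3.0]]))  # state set by adapt(), not backprop
    print(norm.trainable_weights)           # [] -- nothing is updated by the optimizer
    print(len(norm.non_trainable_weights))  # the mean/variance state lives here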
When should you avoid using the Keras function adapt()?
A. When working with lookup layers with very large vocabularies
B. When using TextVectorization while training on a TPU pod
C. When using StringLookup while training on multiple machines via ParameterServerStrategy
D. When working with lookup layers with very small vocabularies
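In those large-vocabulary or distributed settings, a common alternative (sketched below) is to pass a precomputed vocabulary instead of calling adapt():

    import tensorflow as tf

    vocab = ["cat", "dog", "bird"]  # in practice, loaded from a precomputed vocabulary file
    lookup = tf.keras.layers.StringLookup(vocabulary=vocab)
    print(lookup(tf.constant(["dog", "fish"])))  # unknown strings map to the OOV index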
Which of the following is a part of Keras preprocessing layers?
A. Image data augmentation
B. Image preprocessing
C. Numerical features preprocessing
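All three families in one sketch (assuming TF 2.6+, where these layers moved out of the experimental namespace):

    import tensorflow as tf

    pipeline = tf.keras.Sequential([
        tf.keras.layers.Resizing(64, 64),          # image preprocessing
        tf.keras.layers.Rescaling(1.0 / 255),      # numerical rescaling
        tf.keras.layers.RandomFlip("horizontal"),  # image data augmentation
        tf.keras.layers.RandomRotation(0.1),       # image data augmentation
    ])
    images = tf.random.uniform((2, 96, 96, 3), maxval=255.0)
    print(pipeline(images, training=True).shape)   # (2, 64, 64, 3)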
True or False: Non-linearity helps train your model faster and with more accuracy, without losing important information.
A. True
B. False
During the training process, each additional layer in your network can successively reduce signal vs. noise. How can we fix this?
A. Use non-saturating, linear activation functions.
B. Use non-saturating, nonlinear activation functions such as ReLUs.
C. Use sigmoid or tanh activation functions.
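A two-line comparison of non-saturating and saturating activations:

    import tensorflow as tf

    x = tf.constant([-3.0, 0.0, 3.0])
    print(tf.nn.relu(x))     # non-saturating for positive inputs, so gradients keep flowing
    print(tf.nn.sigmoid(x))  # squashes toward 0/1; deep stacks can suffer vanishing gradients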
How does Adam (optimization algorithm) help in compiling the Keras model?
A. By updating network weights iteratively based on training data
B. By diagonal rescaling of the gradients
C. Both by updating network weights iteratively based on training data and by diagonal rescaling of the gradients
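Compiling with Adam in a sketch (the learning rate shown is just Keras's default):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # adaptive per-parameter updates
        loss="mse",
    )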
The predict function in the tf.keras API returns what?
A. Numpy array(s) of predictions
B. Input_samples of predictions
C. Both numpy array(s) of predictions & input_samples of predictions
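A sketch confirming the return type (input data is synthetic):

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    preds = model.predict(np.random.rand(4, 3).astype("float32"))
    print(type(preds), preds.shape)  # <class 'numpy.ndarray'> (4, 1)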
What is the significance of the Fit method while training a Keras model?
A. Defines the number of steps per epoch
B. Defines the number of epochs
C. Defines the validation steps
D. Defines the batch size
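A fit() sketch showing the knobs the question mentions (the values are illustrative):

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    model.compile(optimizer="adam", loss="mse")
    x = np.random.rand(64, 3).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    # epochs and batch_size (and, when feeding datasets, steps_per_epoch and
    # validation_steps) are all arguments to fit().
    model.fit(x, y, epochs=2, batch_size=16, validation_split=0.25)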
Select the correct statement regarding the Keras Functional API.
A. Unlike the Keras Sequential API, we do not have to provide the shape of the input to the model.
B. Unlike the Keras Sequential API, we have to provide the shape of the input to the model.
C. The Keras Functional API does not provide a more flexible way for defining models.
The Keras Functional API can be characterized by having:
A. Multiple inputs and outputs and models with shared layers.
B. Single inputs and outputs and models with shared layers.
C. Multiple inputs and outputs and models with non-shared layers.
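A Functional API sketch covering both questions: an explicit input shape, two inputs, a shared layer, and two outputs:

    import tensorflow as tf

    inp_a = tf.keras.Input(shape=(16,))   # the input shape is declared up front
    inp_b = tf.keras.Input(shape=(16,))
    shared = tf.keras.layers.Dense(8, activation="relu")  # one layer instance, reused
    out_a = tf.keras.layers.Dense(1, name="score_a")(shared(inp_a))
    out_b = tf.keras.layers.Dense(1, name="score_b")(shared(inp_b))
    model = tf.keras.Model(inputs=[inp_a, inp_b], outputs=[out_a, out_b])
    model.summary()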
How does regularization help build generalizable models?
A. By adding dropout layers to our neural networks
B. By using image processing APIs to find out accuracy
C. By adding dropout layers to our neural networks and by using image processing APIs to find out accuracy
L2 regularization provides which of the following?
A. It subtracts the sum of the squared parameter weights from the loss function.
B. It multiplies the loss function by the sum of the squared parameter weights.
C. It adds the sum of the squared parameter weights as a term to the loss function.
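Both regularization techniques in one sketch: a Dropout layer, and an L2 penalty that adds 0.01 times the sum of squared kernel weights to the loss:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2(0.01)),
        tf.keras.layers.Dropout(0.5),  # randomly zeroes activations during training
        tf.keras.layers.Dense(1),
    ])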
Fill in the blanks. When sending training jobs to Vertex AI, it is common to split most of the logic into a _______ and a _______ file.
A. task.py, model.py
B. task.json, model.json
C. task.avro, model.avro
D. task.xml, model.xml
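A typical layout for such a trainer package (the comments are illustrative):

    trainer/
        __init__.py   # marks the folder as a Python package
        model.py      # the core model-building and training logic
        task.py       # the entry point: parses arguments, calls model.py, writes outputs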
Which file is the entry point to your code that Vertex AI will start, containing details such as how to parse command-line arguments and where to write model outputs?
A. model.py
B. model.json
C. model.avro
D. task.py
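A minimal task.py sketch; the argument names and the train_and_evaluate helper are illustrative, not a fixed Vertex AI contract:

    # task.py -- the entry point that Vertex AI starts
    import argparse
    from trainer import model  # hypothetical sibling module holding the training logic

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--batch_size", type=int, default=32)
        parser.add_argument("--output_dir", required=True)  # where model outputs are written
        args = parser.parse_args()
        model.train_and_evaluate(args)  # hypothetical function defined in model.py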
When you package up a TensorFlow model as a Python package, what file must every folder contain for Python to treat it as a module?
D. an __init__.py file
To make your code compatible with Vertex AI, there are three basic steps that must be completed in a specific order. Choose the answer that best describes those steps.
A. First, upload data to Google Cloud Storage. Then submit your training job with gcloud to train on Vertex AI. Next, move code into a trainer Python package.
B. First, download data from Google Cloud Storage. Then submit your training job with gcloud to train on Vertex AI. Next, move code into a trainer Python package.
C. First, upload data to Google Cloud Storage. Next, move code into a trainer Python package. Then submit your training job with gcloud to train on Vertex AI.
D. First, move code into a trainer Python package. Next, upload data to Google Cloud Storage. Then submit your training job with gcloud to train on Vertex AI.
Fill in the blanks. You can use either pre-built containers or custom containers to run training jobs. Both containers require you to specify settings that Vertex AI needs to run your training code, including _______ and _______.
A. Source distribution name, job name, worker pool
B. Region, source distribution, custom URI
C. Region, display-name, worker-pool-spec
D. Cloud storage bucket name, display-name, worker-pool-spec
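The three steps end to end, sketched as shell commands (the bucket, region, and container image are illustrative; the flag names follow gcloud ai custom-jobs create):

    # 1. Move code into a trainer Python package (see the layout sketched earlier).
    # 2. Upload training data to Cloud Storage.
    gsutil cp -r data/ gs://my-bucket/data/
    # 3. Submit the training job to Vertex AI.
    gcloud ai custom-jobs create \
      --region=us-central1 \
      --display-name=my_training_job \
      --worker-pool-spec=machine-type=n1-standard-4,replica-count=1,executor-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest,local-package-path=.,python-module=trainer.task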