Tasks overview

The loops identified previously provide an article-by-article overview of how automation is achieved through the collaboration of humans and algorithms. Tasks can also be viewed beyond their division by article: below, all tasks are grouped solely by the actor performing them, either a human or an algorithm.

Tasks performed by humans

| Human | Task | Description |
| --- | --- | --- |
| API Developer | Feedback on generated sentences | The API developer interacts with the system through a web UI to provide feedback on the generated sentences. |
| API Developer | Selection of sentences | The API developer selects appropriate sentences for training the intent classifier. |
| Annotator | Input object class | Manually input the class of the object on the turntable. |
| Annotator | Verify low-confidence predictions | Verify low-confidence predictions through human annotation. |
| Annotator | Provide annotations | Provide annotations for low-confidence predictions. |
| Annotator | Create accurate segmentation | The human quickly creates a highly accurate segmentation with the assistance of the algorithm. |
| Annotator | Manual annotations | Manually annotate image patches with TIL-positive or TIL-negative labels for a subset of cancer types. |
| CSA | Provide feedback | The CSA provides feedback on a scale of 1–5. |
| Chatbot Trainer/Designer | Training data creation | Humans create training data by labeling utterances with the intent categories used to train the NLU system. |
| Chatbot Trainer/Designer | Drift detection | Humans detect drift in the chatbot system by reviewing chat logs and identifying new topics or changes in user behavior, which is used to update the system. |
| Chatbot Trainer/Designer | Intent design issue identification | Humans identify intent design issues and suggest improvements, ensuring the system's effectiveness. |
| Chatbot Trainer/Designer | Actionable insights review | Humans review and confirm the actionable insights provided by the chatbot system, ensuring the suggested changes are appropriate before implementation. |
| Clinician | Input data acquisition | Obtain DICOM-format MRI data from patients with high- and low-grade gliomas. |
| Curator | Data preparation | Curate and label medical images for training and testing. |
| Decision Maker | Provide feedback | The decision maker provides feedback on the candidate solutions based on her preference. |
| Decision Maker | Provide feedback | The decision maker provides feedback on the new candidate solutions based on her preference. |
| Decision Maker | Identify tasks/goals | The human identifies the tasks or goals to be accomplished in a complex planning environment. |
| Decision Maker | Evaluate alerts/suggestions | The human decision maker evaluates the alerts and suggestions provided by RADAR and incorporates them into their decision-making process as they see fit. |
| Designer | Human input | iPLAN accepts user guidance at every stage of the design process, so the user can provide input at different stages and across a wide range of levels of detail. |
| Domain Expert | Make decisions | The human domain experts use the explanations to make decisions. |
| Domain Experts | Data collection | Provide input for accurate delineation of object boundaries. |
| Driver | Provide data | Provide data to the algorithm through a driving simulation. |
| Expert | Edit learned rules | Experts can directly edit the learned rules generated by the model to improve its interpretability and accuracy. |
| Expert | Annotate predicted values | Experts can annotate observations and predicted values to help identify errors or areas where the model needs improvement. |
| Expert | Interact with network predictions | Human experts interact with the network's predictions and correct any inaccuracies in the annotations. |
| Expert | Review and provide feedback | Human experts review the updated network predictions and provide further feedback, initiating additional training iterations if necessary. |
| Expert | Uncertainty estimation | Evaluate the uncertainty metrics (aleatoric uncertainty, epistemic uncertainty, entropy, and mutual information) to understand the model's confidence in its predictions. |
| Expert | Uncertainty thresholding | Define and validate the threshold τ based on domain knowledge and the desired balance between automated decision-making and human intervention. |
| Expert | Automated referral | Review flagged cases, leveraging domain expertise to assess and decide on the uncertain predictions provided by the model. |
| Expert | Human-in-the-loop intervention | Provide feedback and corrective actions based on the model's uncertain predictions, ensuring that critical cases are appropriately flagged for further assessment. |
| Experts | Elicit samples | Experts are asked to describe a student's thought process, enumerating strategies for reaching a right or wrong answer. A detailed enough description can be used to label indefinitely; the labels will be noisy, but the quantity should make up for the uncertainty. |
| Experts | Debugging and monitoring | Mechanisms are established to debug and monitor the behavior of AI systems, allowing for transparency, fairness, and accountability in the governance of autonomous machines. |
| General | Acting in the environment | Humans act in the environment alongside agents to ensure safe exploratory actions in sensitive contexts such as autonomous driving. |
| General | Providing rewards | Humans provide rewards for several learning algorithms, for example when evaluating machine-generated dialogues, summaries, semantic parses, natural language, machine translation, and many others. |
| General | Generating tasks | Humans generate tasks for agents to achieve. |
| Institutions | Programming the algorithmic social contract | Institutions and tools are developed to program the algorithmic social contract between humans and governance algorithms, ensuring that AI systems align with societal values and norms. |
| Knowledge worker | Determine answerability and assign tags | Workers determine whether an answer to the question can be provided at all and place the question in different queues based on this criterion. Workers assign finer-grained tags to questions and iterate over tags already assigned to prior questions as new ones come in. |
| Knowledge worker | Access document and provide answer | For questions about document content, workers manually access the document using the share link and their personal credentials. Workers copy and paste the question and document content into a custom UI front end for the ML Q&A model. If the AI-provided answer is unsatisfactory upon review, workers provide a human response. |
| Occupants | Modify behavior | The occupants modify their behavior based on the feedback and suggestions provided by the algorithms. |
| Occupants | Earn points | The occupants earn points based on their energy-saving behavior. |
| Occupants | Compete with others | The points earned by occupants can be used to compete with other occupants in the building. |
| Oncologist | Clinical decision support and research analysis | Utilize the quantitative tumor measurements for personalized treatment planning and response assessment. Leverage the system to streamline data curation, model prototyping, and standardized dataset creation for research collaborations. |
| Operator | Provide feedback | The human operator provides ground-truth annotations in response to the robot's requests. |
| Participant | Submit question and share link | Participants submit questions and document share links through a Microsoft Word add-in. |
| Participant | Receive answer | The system returns the answer to the participant through the Microsoft Word add-in. |
| Pathologists | Validation of annotations | Assess representative samples provided by the dataset. |
| Play Expert | Generate synopsis | Generate a synopsis of the play using various options, such as the play's background/setting from play databases, more detailed synopses from fan websites, or scenic remarks extracted from the texts of the plays themselves. |
| Play Expert | Generate character list | Generate a list of characters based on the synopsis. |
| Player | Form team | The human player forms a team with a computer teammate. |
| Player | In charge | The human player is in charge of the team. |
| Player | Buzz and answer | At any point before the question is fully read, the human can decide to buzz, interrupt the readout, and provide an answer. |
| Player | Providing behavior labels | Players interact with the system by providing labels for behaviors, indicating the time and spatial contexts of their actions within the game environment. |
| Player | Interactive correction | Players engage with the system to interactively correct the computational model's probabilities, as well as the nodes that make up the graphical model, contributing to a more accurate representation of player cognition. |
| Player | Providing insights and feedback | Players provide insights and feedback that contribute to the development of AI agents acting as intelligent tutoring systems for esports, enabling personalized coaching and gameplay experiences. |
| Policymakers/ethicists | Involving policymakers and ethicists | Policymakers and ethicists play a crucial role in overseeing the implementation of the algorithmic social contract, ensuring that AI systems operate in accordance with societal expectations. |
| Policymakers/public | Adapting to human values | New metrics and methods are developed to evaluate AI behavior against quantifiable human values, enabling policymakers and the public to articulate their expectations to machines. |
| Radiologist | Data classification review | Review the classification results and ensure the accuracy of sequence identification. |
| Researcher | Data import and visualization | Import the dataset into AstronomicAL, then visualize and integrate data from different sources using customizable domain-specific plots. |
| Researcher | Custom model and query strategy | Adapt AstronomicAL to the research at hand, allowing for domain-specific plots, novel query strategies, and improved models. Customize models and query strategies to improve performance. |
| Researcher | Provide instructions | Provide natural language instructions to the algorithm for task completion. |
| Researcher | Collaborate and provide feedback | Collaborate with the algorithm to provide feedback and guidance during the training process. |
| Researchers | Performance benchmarking | Test their developed models on the dataset. |
| Stakeholders | Negotiating values and goals | Humans and stakeholders negotiate the values and goals that AI systems should strive towards, considering trade-offs between different societal interests and ethical considerations. |
| User | Natural interaction | Users interact with the system to explore and adapt distant inspirations found in the scientific papers, leveraging the analogical similarities identified by the algorithms. |
| User | Converse with chatbot | The user initiates a conversation with the chatbot. |
| User | Providing feedback | The user provides feedback to the algorithm in the form of a terminal reward upon succeeding or failing at the task. |
| User | Correction | If any part of the predicted contour is inaccurate, users can correct it by drawing line segments. |
| User | Review matches and provide feedback | Reviews the matches and provides feedback on any incorrect matches. |
| User | Review predicted types and provide feedback | Reviews the predicted semantic types and provides feedback on any incorrect predictions. |
| User | Review inferred functions and provide feedback | Reviews the inferred labeling functions and provides feedback on any incorrect inferences. |
| User | Review new training data and provide feedback | Reviews the new training data and provides feedback on any incorrect data. |
| User | Review local model performance and provide feedback | Reviews the performance of the local model and provides feedback on any incorrect predictions. |
| User | Review predictions and provide feedback | Reviews the predictions and provides feedback on any incorrect predictions. |
| User | Reviewing explanations | The human reviews the explanations and considers the suggested changes to their model. |
| User | Updating model | Based on the explanations and suggested changes, the human updates their model to align it with the AI system's model. |
| User | Input text instructions | Provides natural language text instructions describing the desired scene or modifications to an existing sketch. |
| User | User feedback and refinement | Reviews the generated sketches and provides feedback or additional instructions for refinement. |
| Evaluator | Provide feedback on generated questions | Interacts with the algorithm to provide feedback on the effectiveness of the generated clarifying questions. |
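A recurring pattern in the annotator rows above is routing: predictions below a confidence cutoff are queued for human annotation, while the rest are accepted automatically. The sketch below illustrates that pattern in minimal form; every name (`route_prediction`, `CONFIDENCE_THRESHOLD`, the sample items) is invented for illustration and does not come from any specific system in the table.

```python
# Minimal sketch of the low-confidence routing pattern, under the assumption
# that the model emits a (label, confidence) pair per item. The threshold
# value 0.8 is an arbitrary illustrative choice; real systems tune it.

CONFIDENCE_THRESHOLD = 0.8

def route_prediction(item, label, confidence, human_queue, auto_accepted):
    """Queue low-confidence predictions for a human; accept the rest."""
    if confidence < CONFIDENCE_THRESHOLD:
        human_queue.append(item)            # human annotator labels this item
    else:
        auto_accepted.append((item, label))  # prediction accepted as-is

human_queue, auto_accepted = [], []
for item, label, conf in [("img1", "cat", 0.95), ("img2", "dog", 0.55)]:
    route_prediction(item, label, conf, human_queue, auto_accepted)
# "img2" ends up in human_queue; ("img1", "cat") is accepted automatically
```

The human-provided labels for queued items would then be fed back into the training set, closing the loop described in the "Provide annotations" row.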

Tasks performed by algorithms

| Task | Description |
| --- | --- |
| AI Model Training | Train deep neural network models, such as the VGG 16-layer network and Inception-v4, using the combined dataset to generalize across multiple cancer types. |
| Active Learning and Labeling | Utilize active learning techniques to prioritize data points that offer high information gain and explore each chosen data point, injecting domain expertise directly into the training process to ensure accurate and reliable labels. |
| Adapt to Human Values | Algorithms must adapt to human values as new metrics and methods are developed to evaluate their behavior against quantifiable human values. |
| Algorithmic Processing | The fully automated system employs advanced machine learning algorithms to process a large corpus of scientific papers. |
| Analyze data | The algorithms analyze the data to predict the energy consumption of each resource based on the behavior of occupants. |
| Assigning scores | Assign a score to each generated phenotype image based on its similarity to the examples in the database. |
| Automated Referral | Flag cases with high uncertainty, surpassing the defined threshold τ, for referral to medical professionals. |
| Automatic Application | Automatically apply the trained AI model to each cancer type without human adjustment, providing TIL prediction results. |
| Automatic Computation | The AI system utilizes algorithms to automatically compute explanations that suggest changes to the human's model. |
| Autonomous Learning | The collaboration aims to reduce the burden on the human operator and enhance the robot's autonomous learning capabilities. |
| Balancing User Commands | The algorithm balances following user commands closely with deviating from the user's actions when they are suboptimal, discarding actions whose values fall below some threshold and selecting the remaining action closest to the user's input. |
| Calculate cosine similarity | Uses an embedding of the entire table to calculate the cosine similarity between the column names and semantic types. |
| Choose model | Choose one of the models, such as 'Search' or 'Hand-curated answers'. |
| Collect data | The algorithms collect data from IoT sensors and cyber-physical sensing/actuation platforms. |
| Comply with the Algorithmic Social Contract | Algorithms must comply with the algorithmic social contract that is programmed and monitored by institutions and tools, ensuring that they operate in accordance with societal expectations. |
| Confidence level calculation | Calculate confidence levels for predicted image categories. |
| Conversation Navigation | The dialogue system navigates the conversation to its successful completion using a rule-based graph and learning elements. |
| Convert Predictions for Display | The network's predictions are converted back to a format suitable for display in WSI viewing software, such as Aperio ImageScope. |
| Create annotations | Create annotations of reasonable quality for instance-aware semantic segmentation with minimal effort. |
| Data Classification and Preprocessing | Classify MRI sequences using an ensemble of NLP and CNN models. Preprocess the MRI data, including image registration, skull stripping, and bias field correction. |
| Data Collection | Utilize input from domain experts to collect accurate annotations at scale. |
| Define performance scores | Define scores for overall, vehicle-trajectory, and handling performance based on the outcome of the optimizers. |
| Draw attention to region | The algorithm draws the human's attention to that region of the recording. |
| Evaluate maximum possible force of each tire | Evaluate the maximum possible force of each tire independently using the optTire optimizer. |
| Extract passages from documents | The system uses an AI model to extract passages from documents that may contain answers to the questions. |
| Extraction of Action Phrases and Initiation of Sentences from API Description | The algorithm extracts phrases from the OpenAPI specification and possibly augments them with human-provided phrases. |
| Feedback Loop | As users interact with the system and explore the retrieved scientific papers, their interactions and feedback can be used to further refine the algorithms and improve the relevance and accuracy of the results over time. |
| Filtering and Selection | The algorithm filters the generated sentences to eliminate noisy ones and selects a diverse set of sentences based on human feedback. |
| Finding visual similarity | Use a CNN classifier to find visual similarity between generated phenotype images and a database of examples. |
| Fine-tune Based on Human Feedback | The network is fine-tuned based on human feedback, leading to improved accuracy and a reduced burden of manual WSI annotation. |
| Fine-tune Language Model | Fine-tune the LM on theatre plays to see how far this approach can go. |
| Generate Final Text | Generate the final text using an approach similar to the base LM generation. |
| Generate candidate solutions | The EMO algorithm generates a set of candidate solutions. |
| Generate draft segmentation | The machine learning algorithm generates a draft segmentation of the video. |
| Generate new solutions | The EMO algorithm generates a new set of candidate solutions based on the guidance from the LTR model. |
| Generate new training data | Generates new training data using data programming and the inferred labeling functions. |
| Generate plan/COA | The automated planning system (RADAR) receives the model of the world, the initial state, and the goals/tasks from the human decision maker. RADAR's automated planner analyzes the provided information and generates a plan or course of action (COA) based on the given model and goals. |
| Generate verbal explanations | Generate verbal explanations based on difficult-to-formalize experiential prior knowledge from human domain experts. |
| Generate visual explanations | The system provides transparent explanations of the results to the human domain experts. |
| Generating curriculum | If no human input is possible, a different AI agent can be used to generate the curriculum. |
| Guide EMO algorithm | The LTR model is applied to guide the EMO algorithm towards the SOI. |
| Human-in-the-Loop Intervention | Incorporate human feedback to improve the model's performance and refine the uncertainty-thresholding process. |
| Identify images with tortoises | The algorithm identifies at least one image containing tortoises per ground-truth segment. |
| Identifying Differences | The AI system identifies differences between its own model and the human's model. |
| Image prediction | Predict categories of images using deep learning models trained in previous periods. |
| Implicit Inference | The algorithm learns to assist the user without access to private information, implicitly inferring it from the user's input. |
| Improvement | CTN consistently improves with more human corrections, potentially achieving better performance than fully supervised methods with considerably less annotation effort. |
| Incorporate Societal Values | Algorithms must incorporate the values and goals negotiated by humans and stakeholders, considering trade-offs between different societal interests and ethical considerations. |
| Incorporating Inferred Goals | The algorithm can incorporate inferred goals into the agent's observations when the goal space and user model are known, further improving sample efficiency. |
| Incorporation | CTN formats these corrections as partial contours and incorporates them back into training via an additional Chamfer loss. |
| Infer labeling functions | Infers labeling functions from the data, which are used to generate new training data and prediction functions. |
| Inference | Once samples have been elicited from the expert prior, deep learning techniques can be used to infer the correct label for a given input x: a deep model is trained on the labeled data and used to predict labels for new inputs. |
| Initialization | CTN takes the exemplar contour as an initialization and gradually evolves it to minimize the weighted loss for each unlabeled image. |
| Integrate feedback | Integrate feedback from the human domain experts to improve performance. |
| Integrating Labeled Behaviors | The algorithm integrates the labeled behaviors provided by players into the computational models of esports players, incorporating qualitative and quantitative inputs. |
| Intent Classification | The NLU system classifies user queries into intent categories using natural language understanding techniques. |
| Issue Identification and Response Delivery | The chatbot system identifies challenges in providing a good response and may hand the conversation off to a human agent or raise a ticket for human intervention, aiming to provide consistent and suitable responses while delivering a high-quality customer experience. |
| Iterative Improvement | The curiosity agent continues to select actions and seek out new information, iteratively improving the robot's learning process. |
| Learn preference | The LTR neural network learns the decision maker's preference based on the feedback. |
| Learning | The algorithm uses deep reinforcement learning with neural network function approximation to learn an end-to-end mapping from observation and input to agent action values, with task reward as the only form of supervision. |
| Learning from human demonstrations | Agents learn from human demonstrations under the imitation learning (IL) paradigm when it is challenging to design a reward function, or when the reward function would be sparse and thus hard for an RL agent to learn from. |
| Learning from human-generated tasks | Agents learn from the tasks generated by humans. |
| Machine Learning-driven Interpretation | Utilizes deep learning models to interpret the natural language input and understand the user's instructions. |
| Make predictions on new data | Uses the local model to make predictions on new data. |
| Match column names to type ontology | Matches each column name to the labels in the type ontology using syntactic and semantic matching. |
| Model Deployment | The trained intent classifier is deployed and serves to classify users' intents from their input utterances in the dialog system when they attempt to invoke an API endpoint conversationally. |
| Model Training | The algorithm trains the intent recognition model for the chatbot. |
| Model updates | Include pseudo-labels in the final dataset for further model updates or ecological analyses. |
| Monitor planning process | The automated planner continuously monitors the human decision maker's planning process and the current state of the environment. |
| Natural Language Generation | The algorithm generates equivalent sentences using a variety of language models. |
| Operate Transparently | Algorithms must operate transparently, allowing their behavior to be debugged and monitored to ensure alignment with the algorithmic social contract. |
| Optimize energy consumption | The algorithms optimize the energy consumption of the building as a whole. |
| Optimize vehicle acceleration at the CoG | Optimize the vehicle's acceleration at the center of gravity (CoG) using the optCog optimizer. |
| Optimizing Plans and Behavior | With the reconciled model, the AI system optimizes its plans and behavior to be in line with the updated human model. |
| Outcome Assessment and Reporting | Provide segmentation masks for longitudinal tumor tracking and quantitative growth assessment. Generate standardized reports and visualizations based on the processed MRI data. |
| Performance Benchmarking | Provide performance benchmarks to encourage the development of accurate and interpretable downstream models for the computational analysis of H&E-stained colon tissue. |
| Present candidate solutions | The consultation module presents a set of selected candidate solutions to the decision maker. |
| Present new candidate solutions | The consultation module presents the new set of candidate solutions to the decision maker. |
| Presenting Explanations | The AI system presents these explanations to the human, highlighting the suggested changes to the human's model. |
| Private Experimentation and Collaboration | Run AstronomicAL entirely locally on the user's system, providing a private space to experiment. Export a simple configuration file to share entire layouts, models, and assigned labels with the community, allowing for complete transparency and effortless reproduction of results. |
| Process Whole Slide Images | The algorithm processes whole-slide images using a semantic segmentation network. |
| Processing Observed Behaviors | The algorithm processes observed behaviors and generates data representations of player actions within the game environment. |
| Producing phenotype images | Generate a set of phenotype images. |
| Propose machine learning model architecture | Propose a machine learning model architecture to directly determine the scores from the initial data. |
| Provide alerts/suggestions | Based on the current state and resource availability, RADAR's automated planner provides alerts and suggestions to the human decision maker regarding potential drawbacks in the plan, resource constraints, or problems that may arise in the future. |
| Provide answer | Provide the answer, which is observed by the CSA (care agent). |
| Provide feedback | Once the correct label for a given input x has been inferred, feedback can be provided to the student based on their performance. This feedback can be associated with specific parts of a student's solution and can articulate their misconceptions in the language of the instructor. |
| Provide feedback | The algorithms provide feedback to occupants about their energy usage. |
| Provide interpretations | The interpretations provided by the computer should help the human better decide whether to trust the computer's prediction. |
| Providing Feedback | The algorithm provides real-time action feedback to the user based on the learned mapping. |
| Purpose Matching | Algorithms are used to emulate human judgment of purpose match, ensuring that the system finds partial purpose matches in the top results. |
| Ranking phenotype images | Rank the generated phenotype images in order of aesthetic quality based on the assigned scores. |
| Re-training on artist-specific datasets | Increase accuracy in automating personal aesthetic judgement. |
| Reformulate NLP Task | Tackle the generation of clarifying questions for truly interactive agents. |
| Request Ground Truth | The curiosity agent may request ground-truth annotations from the human operator when additional information is needed. |
| Restrict Generation | Restrict the generation by enforcing that only certain predetermined characters speak, possibly in a pregenerated order; this can be achieved by stopping the generation. |
| Retrain with Corrected Annotations | The corrected annotations provided by human experts are used to retrain the semantic segmentation network. |
| Reward Function Decomposition | The algorithm decomposes the agent's reward function into known terms computed for every state and a terminal reward provided by the user, enabling the system to learn efficiently from a dense reward signal that captures generally useful behaviors and to adapt to individual users through feedback. |
| Route questions to appropriate respondents | The system routes questions to the appropriate respondents, either the AI model or human knowledge workers. |
| Seed Language Model | Seed a language model (LM) with a prompt that is the beginning of a dramatic situation. |
| Segment object from background | Automatically segment the corresponding object from the background using basic image processing techniques. |
| Select Actions | The curiosity agent selects actions to navigate the robot within the exploration space. |
| Semi-Automatic Annotations | Apply a classification algorithm to whole-slide images (WSIs) and adjust the predicted TIL probability maps by applying thresholds to generate semi-automatic annotations. |
| Simplify Data Collection | Use a single-turn data collection strategy to increase the speed of data collection. |
| Sketch Generation | Generates sketched scenes based on the interpreted text instructions, using state-of-the-art deep neural network architectures. |
| Speed Up Training Environment | Use a new gridworld environment for fast and scalable experiments. |
| Suggest ways to improve behavior | The algorithms suggest ways to improve the energy-saving behavior of occupants. |
| Test Set Creation and Validation | Curate a labeled test set to demonstrate the validity and generalizability of the model. Mark any uncertain example as unsure, ensuring that all training data are of high quality. |
| Train local model | Trains a local model on the new training data. |
| Training Data Generation | Combine manually annotated patches and semi-automatically annotated patches to form the training set for the AI model. |
| Training on database of examples | Learn visual features important for aesthetic evaluation. |
| Tumor Segmentation and Feature Extraction | Segment tumor tissue subtypes using CNNs to generate quantitative tumor measurements. Optionally allow expert-in-the-loop manual refinement of segmentation results. |
| Uncertainty Estimation | Generate stochastic predictions using Monte Carlo dropout, capturing aleatoric and epistemic uncertainties in the segmentation outputs. |
| Uncertainty Thresholding | Apply an uncertainty threshold (τ) to the model's predictions, determining the level of uncertainty for each case. |
| Update guesses | The computer periodically updates its guesses and interpretations (every 4 words in the reported experiments). |
| Update the model | The model is updated based on the feedback provided by the experts, and the learning process continues iteratively. |
| Update weights | The model updates its weights based on this feedback. |
| User Feedback and Refinement | Incorporates user feedback and iteratively refines the sketches based on the provided instructions. |
| User Interface | The system presents the top results to users through an intuitive, user-friendly search interface. |
| Utilize Feedback | The robot uses the feedback to improve its performance and adjust its exploration strategy. |
| Utilizing Corrected Model | The algorithm uses the corrected computational model to refine its understanding of player intents, strategies, and tactics within the gaming environment. |
| Validation of Annotations | Compute quantitative concordance statistics between pathologists and the dataset to ensure the accuracy of the annotations. |
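The "Uncertainty Thresholding" and "Automated Referral" tasks above describe splitting cases by a threshold τ: sufficiently uncertain predictions go to a human expert, the rest proceed automatically. A minimal sketch of that triage step is shown below, using predictive entropy as the uncertainty measure; this is an illustrative stand-in, not the cited system's implementation, and all names (`triage`, `predictive_entropy`, the toy probability vectors) are invented.

```python
# Illustrative sketch of uncertainty thresholding: cases whose predictive
# entropy exceeds tau are flagged for human referral. Real systems may use
# MC-dropout-derived aleatoric/epistemic uncertainty instead of raw entropy.
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def triage(cases, tau):
    """Split (case_id, probs) pairs into automated decisions and referrals."""
    automated, referred = [], []
    for case_id, probs in cases:
        if predictive_entropy(probs) > tau:
            referred.append(case_id)    # high uncertainty -> human review
        else:
            automated.append(case_id)   # confident -> automated decision
    return automated, referred

cases = [("a", [0.98, 0.02]),   # near-certain prediction
         ("b", [0.55, 0.45])]   # highly uncertain prediction
automated, referred = triage(cases, tau=0.5)
# case "a" is handled automatically; case "b" is referred to an expert
```

The expert's decisions on referred cases can then feed the "Human-in-the-Loop Intervention" task, refining both the model and the threshold itself.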
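The "Calculate cosine similarity" and "Match column names to type ontology" tasks rely on comparing embeddings of column names against embeddings of candidate semantic types. The following sketch shows the core computation with tiny hand-made vectors; the embeddings and type names are invented for illustration, since the real system uses learned table embeddings.

```python
# Minimal sketch of cosine-similarity matching between a column-name
# embedding and candidate semantic-type embeddings. The 3-dimensional
# vectors below are toy values, not real learned embeddings.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented embeddings: one column name, two candidate semantic types.
column_embedding = [0.9, 0.1, 0.0]
type_embeddings = {
    "person_name": [0.8, 0.2, 0.1],
    "date": [0.0, 0.1, 0.9],
}

# Pick the semantic type whose embedding is most similar to the column's.
best_type = max(
    type_embeddings,
    key=lambda t: cosine_similarity(column_embedding, type_embeddings[t]),
)
```

The predicted type would then be surfaced to the user for review, matching the "Review predicted types and provide feedback" task in the human table.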