
AI agents are rapidly becoming the next frontier of enterprise transformation, with 82% of organizations planning to adopt them within the next 3 years. According to a Capgemini survey of 1,100 executives at large enterprises, 10% of organizations already use AI agents and more than half plan to use them in the next year. The recently released DeepSeek-R1 models bring state-of-the-art reasoning capabilities to the open source community. Organizations can build agentic applications with these reasoning models to execute complex tasks with advanced decision-making capabilities, boosting efficiency and adaptability.

In this post, we dive into how organizations can use Amazon SageMaker AI, a fully managed service that lets you build, train, and deploy ML models at scale, to build AI agents with CrewAI, a popular agentic framework, and open source models like DeepSeek-R1.

Agentic design vs. traditional software design

Agentic systems offer a fundamentally different approach compared to traditional software, particularly in their ability to handle complex, dynamic, and domain-specific challenges. Unlike traditional systems, which rely on rule-based automation and structured data, agentic systems powered by large language models (LLMs) can operate autonomously, learn from their environment, and make nuanced, context-aware decisions. This is achieved through modular components including reasoning, memory, cognitive skills, and tools, which enable them to perform intricate tasks and adapt to changing scenarios.

Traditional software platforms, though effective for routine tasks and horizontal scaling, often lack the domain-specific intelligence and flexibility that agentic systems provide. For example, in a manufacturing setting, a traditional system might track inventory but lack the ability to anticipate supply chain disruptions or optimize procurement using real-time market insights. In contrast, an agentic system can process real-time data such as inventory fluctuations, customer preferences, and environmental factors to proactively adjust strategies and reroute supply chains during disruptions.

Enterprises should strategically consider deploying agentic systems in scenarios where adaptability and domain-specific expertise are critical. Consider customer service, for instance. Traditional chatbots are limited to preprogrammed responses to expected customer queries, but AI agents can engage with customers using natural language, offer personalized assistance, and resolve queries more efficiently. AI agents can also significantly boost productivity by automating repetitive tasks such as generating reports, emails, and software code. Deployments of agentic systems should focus on well-defined processes with clear success metrics, where there is potential for greater flexibility and less brittleness in process management.

DeepSeek-R1


In this post, we show you how to deploy DeepSeek-R1 on SageMaker, specifically the Llama-70B distilled variant DeepSeek-R1-Distill-Llama-70B, to a SageMaker real-time endpoint. DeepSeek-R1 is an advanced LLM developed by the AI startup DeepSeek. It employs reinforcement learning techniques to enhance its reasoning capabilities, enabling it to perform complex tasks such as mathematical problem-solving and coding. To learn more about DeepSeek-R1, refer to the DeepSeek-R1 model available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, and dive deep into the theory behind building DeepSeek-R1.

Generative AI on SageMaker AI

SageMaker AI, a fully managed service, provides a comprehensive suite of tools designed to deliver high-performance, cost-efficient machine learning (ML) and generative AI solutions for diverse use cases. SageMaker AI empowers you to build, train, deploy, monitor, and govern ML and generative AI models through an extensive range of services, including notebooks, jobs, hosting, experiment tracking, a curated model hub, and MLOps features, all within a unified integrated development environment (IDE).

SageMaker AI simplifies the process for generative AI model builders of all skill levels to work with foundation models (FMs):

  • Amazon SageMaker Canvas enables data scientists to seamlessly use their own datasets alongside FMs to create applications and architectural patterns, such as chatbots and Retrieval Augmented Generation (RAG), in a low-code or no-code environment.
  • Amazon SageMaker JumpStart offers a diverse selection of open and proprietary FMs from providers like Hugging Face, Meta, and Stability AI. You can deploy or fine-tune models through an intuitive UI or APIs, providing flexibility for all skill levels.
  • SageMaker AI features like notebooks, Amazon SageMaker Training, inference, Amazon SageMaker for MLOps, and Partner AI Apps enable advanced model builders to adapt FMs using LoRA, full fine-tuning, or training from scratch. These services support single GPU to HyperPods (cluster of GPUs) for training and include built-in FMOps tools for tracking, debugging, and deployment.

With SageMaker AI, you can build generative AI-powered agentic workflows using a framework of your choice. Some of the key benefits of using SageMaker AI for fine-tuning and hosting LLMs or FMs include:

  • Ease of deployment – SageMaker AI offers access to SageMaker JumpStart, a curated model hub where models with open weights are made available for seamless deployment through a few clicks or API calls. Additionally, for Hugging Face Hub models, SageMaker AI provides pre-optimized containers built on popular open source hosting frameworks such as vLLM, NVIDIA Triton, and Hugging Face Text Generation Inference (TGI). You simply need to specify the model ID, and the model can be deployed quickly.
  • Instance-based deterministic pricing – SageMaker AI hosted models are billed based on instance-hours rather than token usage. This pricing model enables you to more accurately predict and manage generative AI inference costs while scaling resources to accommodate incoming request loads.
  • Deployments with quantization – SageMaker AI enables you to optimize models prior to deployment using advanced strategies such as quantized deployments (such as AWQ, GPTQ, float16, int8, or int4). This flexibility allows you to efficiently deploy large models, such as a 32-billion parameter model, onto smaller instance types like ml.g5.2xlarge with 24 GB of GPU memory, significantly reducing resource requirements while maintaining performance (see the sketch after this list).
  • Inference load balancing and optimized routing – SageMaker endpoints support load balancing and optimized routing with various strategies, providing users with enhanced flexibility and adaptability to accommodate diverse use cases effectively.
  • SageMaker fine-tuning recipes – SageMaker offers ready-to-use recipes for quickly training and fine-tuning publicly available FMs such as Meta’s Llama 3, Mistral, and Mixtral. These recipes use Amazon SageMaker HyperPod (a SageMaker AI service that provides resilient, self-healing clusters optimized for large-scale ML workloads), enabling efficient and resilient training on a GPU cluster for scalable and robust performance.
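
To make the quantization option concrete, the following is a minimal sketch of a quantized deployment through the Hugging Face TGI container. The model ID (a pre-quantized AWQ checkpoint), instance settings, and the role variable are illustrative assumptions, not code from this post's repo:

import json
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

# Hypothetical pre-quantized AWQ checkpoint; swap in the model you intend to serve
quantized_model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface", version="2.3.1"),
    env={
        "HF_MODEL_ID": "Qwen/Qwen2.5-32B-Instruct-AWQ",  # assumption, not from this post
        "HF_MODEL_QUANTIZE": "awq",  # tells TGI to load the model with AWQ quantization
        "SM_NUM_GPUS": json.dumps(1),
    },
    role=role,  # assumes a SageMaker execution role is already defined
)

# A 32B model quantized to 4-bit weights can fit on a single 24 GB GPU
predictor = quantized_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)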

 

Solution overview


CrewAI provides a robust framework for developing multi-agent systems that integrate with AWS services, particularly SageMaker AI. CrewAI's role-based agent architecture and comprehensive performance monitoring capabilities work in tandem with Amazon CloudWatch.

The framework excels at workflow orchestration and maintains enterprise-grade security standards aligned with AWS best practices, making it an effective solution for organizations implementing sophisticated agent-based systems within their AWS infrastructure.

In this post, we demonstrate how to use CrewAI to create a multi-agent research workflow. This workflow creates two agents: one that researches a topic on the internet, and a writer agent that takes this research and acts as an editor by formatting it into a readable format. Additionally, we guide you through deploying one or more LLMs and integrating them into structured workflows, using tools for automated actions, and deploying these workflows on SageMaker AI for a production-ready deployment.

The following diagram illustrates the solution architecture.

Prerequisites

To follow along with the code examples in the rest of this post, make sure the following prerequisites are met:

  • Integrated development environment – This includes the following:
    • (Optional) Access to Amazon SageMaker Studio and the JupyterLab IDE – We will use a Python runtime environment to build agentic workflows and deploy LLMs. Having access to a JupyterLab IDE with Python 3.9, 3.10, or 3.11 runtimes is recommended. You can also set up Amazon SageMaker Studio for single users. For more details, see Use quick setup for Amazon SageMaker AI. Create a new SageMaker JupyterLab Space for a quick JupyterLab notebook for experimentation. To learn more, refer to Boost productivity on Amazon SageMaker Studio: Introducing JupyterLab Spaces and generative AI tools.
    • Local IDE – You can also follow along in your local IDE (such as PyCharm or VS Code), provided that your Python runtime has been configured with network connectivity to your AWS account (to deploy models on SageMaker AI).
  • Permission to deploy models – Make sure that your user execution role has the necessary permissions to deploy models to a SageMaker real-time endpoint for inference. For more information, refer to Deploy models for inference.
  • Access to Hugging Face Hub – You must have access to the Hugging Face Hub’s deepseek-ai/DeepSeek-R1-Distill-Llama-70B model weights from your environment.
  • Access to code – The code used in this post is available in the following GitHub repo.

Simplified LLM hosting on SageMaker AI

Before orchestrating agentic workflows with CrewAI powered by an LLM, the first step is to host and query an LLM using SageMaker real-time inference endpoints. There are two primary methods to host LLMs on SageMaker AI:

  • Deploy from SageMaker JumpStart
  • Deploy from Hugging Face Hub

Deploy DeepSeek from SageMaker JumpStart

SageMaker JumpStart offers access to a diverse array of state-of-the-art FMs for a wide range of tasks, including content writing, code generation, question answering, copywriting, summarization, classification, information retrieval, and more. It simplifies the onboarding and maintenance of publicly available FMs, allowing you to access, customize, and seamlessly integrate them into your ML workflows. Additionally, SageMaker JumpStart provides solution templates that configure infrastructure for common use cases, along with executable example notebooks to streamline ML development with SageMaker AI.

The following screenshot shows an example of available models on SageMaker JumpStart.

To get started, complete the following steps:

  1. Install the latest version of the sagemaker-python-sdk using pip.
  2. Run the following command in a Jupyter cell or the SageMaker Studio terminal:

pip install -U sagemaker

  3. List all available LLMs under the Hugging Face, Meta, or DeepSeek JumpStart hubs. The following code is an example of how to do this programmatically using the SageMaker Python SDK:
from sagemaker.jumpstart.filters import (And, Or)
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# generate a conditional filter to only select LLMs from HF or Meta
filter_value = Or(
    And("task == llm", "framework == huggingface"), 
    "framework == meta", "framework == deepseek"
)

# Retrieve all available JumpStart models
all_models = list_jumpstart_models(filter=filter_value)

For example, deploying the deepseek-llm-r1 model directly from SageMaker JumpStart requires only a few lines of code:

from sagemaker.jumpstart.model import JumpStartModel

model_id = " deepseek-llm-r1" 
model_version = "*"

# instantiate a new JS meta model
model = JumpStartModel(
    model_id=model_id, 
    model_version=model_version
)

# deploy model on a 1 x p5e instance 
predictor = model.deploy(
    accept_eula=True, 
    initial_instance_count=1, 
    # endpoint_name="deepseek-r1-endpoint" # optional endpoint name
)

We recommend deploying your SageMaker endpoints within a VPC and a private subnet with no egress, making sure that the models remain accessible only within your VPC for enhanced security.
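As a sketch of what that looks like, the JumpStart deployment above can be pinned to your VPC by passing a vpc_config when instantiating the model. The subnet and security group IDs below are placeholders you must replace with your own:

from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(
    model_id="deepseek-llm-r1",
    vpc_config={
        "Subnets": ["subnet-0123456789abcdef0"],       # your private subnet(s)
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # a restrictive security group
    },
)

predictor = model.deploy(accept_eula=True, initial_instance_count=1)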

We also recommend you integrate with Amazon Bedrock Guardrails for increased safeguards against harmful content. For more details on how to implement Amazon Bedrock Guardrails on a self-hosted LLM, see Implement model-independent safety measures with Amazon Bedrock Guardrails.

Deploy DeepSeek from Hugging Face Hub

Alternatively, you can deploy your preferred model directly from the Hugging Face Hub or the Hugging Face Open LLM Leaderboard to a SageMaker endpoint. Hugging Face LLMs can be hosted on SageMaker using a variety of supported frameworks, such as NVIDIA Triton, vLLM, and Hugging Face TGI. For a comprehensive list of supported deep learning container images, refer to the available Amazon SageMaker Deep Learning Containers. In this post, we use a DeepSeek-R1-Distill-Llama-70B SageMaker endpoint using the TGI container for agentic AI inference. We deploy the model from Hugging Face Hub using Amazon’s optimized TGI container, which provides enhanced performance for LLMs. This container is specifically optimized for text generation tasks and automatically selects the most performant parameters for the given hardware configuration. To deploy from Hugging Face Hub, refer to the GitHub repo or the following code snippet:

import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
import os
from datetime import datetime

# Model configuration (assumed values shown for the variables this snippet references)
number_of_gpu = 8  # ml.p4d.24xlarge has 8 GPUs
role = sagemaker.get_execution_role()  # or your SageMaker execution role ARN
sagemaker_session = sagemaker.Session()
custom_endpoint_name = f"deepseek-r1-dist-llama70b-{datetime.now().strftime('%Y-%m-%d')}"
HUGGING_FACE_HUB_TOKEN = os.environ["HF_TOKEN"]  # required for gated models

hub = {
    'HF_MODEL_ID': 'deepseek-ai/DeepSeek-R1-Distill-Llama-70B',  # or another hub model, such as Llama-3.3-70B-Instruct
    'SM_NUM_GPUS': json.dumps(number_of_gpu),
    'HF_TOKEN': HUGGING_FACE_HUB_TOKEN,
    'SAGEMAKER_CONTAINER_LOG_LEVEL': '20',  # set to INFO level
    'PYTORCH_CUDA_ALLOC_CONF': 'expandable_segments:True'  # use expandable CUDA memory segments
}

# Create and deploy the model
huggingface_model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface", version="2.3.1"),
    env=hub,
    role=role,
    sagemaker_session=sagemaker_session
)
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.p4d.24xlarge",
    endpoint_name=custom_endpoint_name,
    container_startup_health_check_timeout=900
)

A new DeepSeek-R1-Distill-Llama-70B endpoint should be InService in under 10 minutes. If you want to change the model from DeepSeek to another model from the hub, simply replace the following parameter or refer to the DeepSeek deploy example in the following GitHub repo. To learn more about deployment parameters that can be reconfigured inside TGI containers at runtime, refer to the following GitHub repo on TGI arguments.

...
"HF_MODEL_ID": "deepseek-ai/...", # replace with any HF hub models
# "HF_TOKEN": "hf_..." # add your token id for gated models
...

For open-weight models deployed directly from hubs, we strongly recommend placing your SageMaker endpoints within a VPC and a private subnet with no egress, making sure that the models remain accessible only within your VPC for a secure deployment.
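Once the endpoint is InService, you can run a quick smoke test against it. The following query is illustrative rather than from the post's repo; it assumes the predictor returned by the deployment above and TGI's standard inputs/parameters payload schema:

# TGI expects an "inputs" string plus optional generation "parameters"
payload = {
    "inputs": "What is Amazon SageMaker AI?",
    "parameters": {"max_new_tokens": 256, "temperature": 0.6},
}

response = predictor.predict(payload)
print(response[0]["generated_text"])  # TGI returns a list of generations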

Build a simple agent with CrewAI

CrewAI makes it possible to create multi-agent and highly complex agentic orchestrations using LLMs from several LLM providers, including SageMaker AI and Amazon Bedrock. In the following steps, we create a simple blocks counting agent to serve as an example.

Create a blocks counting agent

The following code sets up a simple blocks counter workflow using CrewAI with two main components:

  • Agent creation (blocks_counter_agent) – The agent is configured with a specific role, goal, and capabilities. This agent is equipped with a tool called BlocksCounterTool.
  • Task definition (count_task) – This is a task that we want this agent to execute. The task includes a template for counting how many blocks of each color are present, where {color} will be replaced with the actual color of the block. The task is assigned to blocks_counter_agent.
from crewai import Agent, Task
from pydantic import BaseModel, Field

# 1. Configure agent
blocks_counter_agent = Agent(
    role="Blocks Inventory Manager",
    goal="Maintain accurate block counts",
    tools=[BlocksCounterTool],
    verbose=True
)

# 2. Create counting task
count_task = Task(
    description="Count {color} play blocks in storage",
    expected_output="Exact inventory count for specified color",
    agent=blocks_counter_agent
)

As you can see in the preceding code, each agent begins with two essential components: an agent definition that establishes the agent’s core characteristics (including its role, goal, backstory, available tools, LLM model endpoint, and so on), and a task definition that specifies what the agent needs to accomplish, including the detailed description of work, expected outputs, and the tools it can use during execution.

This structured approach makes sure that agents have both a clear identity and purpose (through the agent definition) and a well-defined scope of work (through the task definition), enabling them to operate effectively within their designated responsibilities.
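
To see these two components in action, the following is a minimal sketch that wires the agent and task into a single-agent crew. It assumes BlocksCounterTool is defined (we build it in the next section) and that an LLM is available to the agent; without an explicit llm argument, CrewAI falls back to the provider configured in your environment:

from crewai import Crew, Process

blocks_crew = Crew(
    agents=[blocks_counter_agent],
    tasks=[count_task],
    process=Process.sequential,
)

# kickoff() substitutes {color} in the task description with the input value
result = blocks_crew.kickoff(inputs={"color": "red"})
print(result)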

Tools for agentic AI

Tools are special functions that give AI agents the ability to perform specific actions, like searching the internet or analyzing data. Think of them as apps on a smartphone—each tool serves a specific purpose and extends what the agent can do. In our example, BlocksCounterTool helps the agent count the number of blocks organized by color.

Tools are essential because they let agents do real-world tasks instead of just thinking about them. Without tools, agents would be like smart speakers that can only talk—they could process information but couldn’t take actual actions. By adding tools, we transform agents from simple chat programs into practical assistants that can accomplish real tasks.

Out-of-the-box tools with CrewAI
CrewAI offers a range of tools out of the box to use with your agents and tasks. The following list shows some of the available tools.

  • Data Processing Tools – FileReadTool: for reading various file formats
  • Web Interaction Tools – WebsiteSearchTool: for web content extraction
  • Media Tools – YoutubeChannelSearchTool: for searching YouTube channels
  • Document Processing – PDFSearchTool: for searching PDF documents
  • Development Tools – CodeInterpreterTool: for Python code interpretation
  • AI Services – DALL-E Tool: for image generation

Build custom tools with CrewAI
You can build custom tools in CrewAI in two ways: by subclassing BaseTool or using the @tool decorator. Let’s look at the following BaseTool subclassing option to create the BlocksCounterTool we used earlier:

from crewai.tools import BaseTool

class BlocksCounterTool(BaseTool):
    name = "blocks_counter" 
    description = "Simple tool to count play blocks"

    def _run(self, color: str) -> str:
        return f"There are 10 {color} play blocks available"

Build a multi-agent workflow with CrewAI, DeepSeek-R1, and SageMaker AI
 

Multi-agent AI systems represent a powerful approach to complex problem-solving, where specialized AI agents work together under coordinated supervision. By combining CrewAI’s workflow orchestration capabilities with SageMaker AI based LLMs, developers can create sophisticated systems where multiple agents collaborate efficiently toward a specific goal. The code used in this post is available in the following GitHub repo.

Let’s build a research agent and writer agent that work together to create a PDF about a topic. We will use a DeepSeek-R1 Distilled Llama 3.3 70B model as a SageMaker endpoint for the LLM inference.

Define your own DeepSeek SageMaker LLM (using LLM base class)

The following code integrates SageMaker hosted LLMs with CrewAI by creating a custom inference tool that formats prompts with system instructions for factual responses, uses Boto3, the AWS SDK for Python, to call SageMaker endpoints, and processes responses by separating the reasoning (before </think>) from the final answer. This enables CrewAI agents to use deployed models while maintaining structured output patterns.

from typing import Dict, List, Union

import boto3
from crewai import LLM

# Calls the SageMaker endpoint for DeepSeek inference
def deepseek_llama_inference(prompt: dict, endpoint_name: str, region: str = "us-east-2") -> dict:
    try:
        ...  # Response parsing code (see the GitHub repo)

    except Exception as e:
        raise RuntimeError(f"Error while calling SageMaker endpoint: {e}")

# CrewAI-compatible LLM implementation for DeepSeek models on SageMaker
class DeepSeekSageMakerLLM(LLM):
    def __init__(self, endpoint: str):
        ...  # Initialize the LLM with the SageMaker endpoint

    def call(self, prompt: Union[List[Dict[str, str]], str], **kwargs) -> str:
        ...  # Format and return the final response

Name the DeepSeek-R1 Distilled endpoint
Set the endpoint name as defined earlier when you deployed DeepSeek from the Hugging Face Hub:

deepseek_endpoint = "deepseek-r1-dist-v3-llama70b-2025-01-22"

Create a DeepSeek inference tool
Just like how we created the BlocksCounterTool earlier, let’s create a tool that uses the DeepSeek endpoint for our agents to use. We use the same BaseTool subclassing approach here, wrapped in the CustomTool class implemented in sage_tools.py in the tools folder. For more information, refer to the GitHub repo.

from crewai import Crew, Agent, Task, Process
from tools.sage_tools import CustomTool  # wraps BaseTool; see sage_tools.py in the repo's tools folder

# Create the Tool for LLaMA inference
deepseek_tool = CustomTool(
    name="deepseek_llama_3.3_70B",
    func=lambda inputs: deepseek_llama_inference(
        prompt=inputs,
        endpoint_name=deepseek_endpoint
    ),
    description="A tool to generate text using the DeepSeek LLaMA model deployed on SageMaker."
)

Create a research agent
Just like the simple blocks agent we defined earlier, we follow the same template here to define the research agent. The difference is that we give this agent more capabilities: we attach a SageMaker AI hosted DeepSeek-R1 model endpoint as the agent's LLM.

This helps the research agent think critically about information processing by combining the scalable infrastructure of SageMaker with DeepSeek-R1’s advanced reasoning capabilities.

The agent uses the SageMaker hosted LLM to analyze patterns in research data, evaluate source credibility, and synthesize insights from multiple inputs. By using the deepseek_tool, the agent can dynamically adjust its research strategy based on intermediate findings, validate hypotheses through iterative questioning, and maintain context awareness across complex information it gathers.

# Research Agent

research_agent = Agent(
    role="Research Bot",
    goal="Scan sources, extract relevant information, and compile a research summary.",
    backstory="An AI agent skilled in finding relevant information from a variety of sources.",
    tools=[deepseek_tool],
    allow_delegation=True,
    llm=DeepSeekSageMakerLLM(endpoint=deepseek_endpoint),
    verbose=False
)

Create a writer agent
The writer agent is configured as a specialized content editor that takes research data and transforms it into polished content. This agent works as part of a workflow where it takes research from a research agent and acts like an editor by formatting the content into a readable format. The agent is used for writing and formatting, and unlike the research agent, it doesn’t delegate tasks to other agents.

writer_agent = Agent(
    role="Writer Bot",
    goal="Receive research summaries and transform them into structured content.",
    backstory="A talented writer bot capable of producing high-quality, structured content based on research.",
    tools=[deepseek_tool],
    allow_delegation=False,
    llm=DeepSeekSageMakerLLM(endpoint=deepseek_endpoint),
    verbose=False
)

Define tasks for the agents
Tasks in CrewAI define specific operations that agents need to perform. In this example, we have two tasks: a research task that processes queries and gathers information, and a writing task that transforms research data into polished content.

Each task includes a clear description of what needs to be done, the expected output format, and specifies which agent will perform the work. This structured approach makes sure that agents have well-defined responsibilities and clear deliverables.

Together, these tasks create a workflow where one agent researches a topic on the internet, and another agent takes this research and formats it into readable content. The tasks are integrated with the DeepSeek tool for advanced language processing capabilities, enabling a production-ready deployment on SageMaker AI.

research_task = Task(
    description=(
        "Your task is to conduct research based on the following query: {prompt}.\n"
    ),
    expected_output="A comprehensive research summary based on the provided query.",
    agent=research_agent,
    tools=[deepseek_tool]
)

writing_task = Task(
    description=(
        "Your task is to create structured content based on the research provided.\n"
    ),
    expected_output="A well-structured article based on the research summary.",
    agent=writer_agent,
    tools=[deepseek_tool]
)

Define a crew in CrewAI
A crew in CrewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow. In this specific example, the sequential process makes sure tasks are executed one after the other, following a linear progression. There are other more complex orchestrations of agents working together, which we will discuss in future blog posts.

This approach is ideal for projects requiring tasks to be completed in a specific order. The workflow creates two agents: a research agent and a writer agent. The research agent researches a topic on the internet, then the writer agent takes this research and acts like an editor by formatting it into a readable format.

Let’s call the crew scribble_bots:

# Define the crew for a sequential workflow
scribble_bots = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, writing_task],
    process=Process.sequential  # Ensure tasks execute in sequence
)

Use the crew to run a task
We have our endpoint deployed, agents created, and crew defined. Now we’re ready to use the crew to get some work done. Let’s use the following prompt:

result = scribble_bots.kickoff(inputs={"prompt": "What is DeepSeek?"})

Our result is as follows:

**DeepSeek: Pioneering AI Solutions for a Smarter Tomorrow**

In the rapidly evolving landscape of artificial intelligence, 
DeepSeek stands out as a beacon of innovation and practical application. 
As an AI company, DeepSeek is dedicated to advancing the field through cutting-edge research and real-world applications, 
making AI accessible and beneficial across various industries.

**Focus on AI Research and Development**

[... output truncated ...]

Clean up

Complete the following steps to clean up your resources:

  1. Delete your GPU DeepSeek-R1 endpoint:
import boto3

# Create a low-level SageMaker service client.
sagemaker_client = boto3.client('sagemaker', region_name=<region>)

# Delete endpoint
sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
  2. If you’re using a SageMaker Studio JupyterLab notebook, shut down the JupyterLab notebook instance.
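
Optionally, you can also remove the endpoint configuration and model objects that back the endpoint so no stale resources remain. This is a hedged addition to the post's cleanup code: it looks the names up before the endpoint is deleted, rather than assuming them:

# Look up the endpoint's config and model names before deleting the endpoint,
# then remove all three resources
desc = sagemaker_client.describe_endpoint(EndpointName=endpoint_name)
config = sagemaker_client.describe_endpoint_config(
    EndpointConfigName=desc["EndpointConfigName"]
)

sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
sagemaker_client.delete_endpoint_config(EndpointConfigName=desc["EndpointConfigName"])
for variant in config["ProductionVariants"]:
    sagemaker_client.delete_model(ModelName=variant["ModelName"])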

Conclusion

In this post, we demonstrated how you can deploy an LLM such as DeepSeek-R1 (or another FM of your choice) from popular model hubs like SageMaker JumpStart or Hugging Face Hub to SageMaker AI for real-time inference. We explored inference frameworks like Hugging Face TGI, which help streamline deployment while integrating built-in performance optimizations to minimize latency and maximize throughput. Additionally, we showcased how the developer-friendly SageMaker Python SDK simplifies endpoint orchestration, allowing seamless experimentation and scaling of LLM-powered applications.

Beyond deployment, this post provided an in-depth exploration of agentic AI, guiding you through its conceptual foundations, practical design principles using CrewAI, and the seamless integration of state-of-the-art LLMs like DeepSeek-R1 as the intelligent backbone of an autonomous agentic workflow. We outlined a sequential CrewAI workflow design, illustrating how to equip LLM-powered agents with specialized tools that enable autonomous data retrieval, real-time processing, and interaction with complex external systems.

Now, it’s your turn to experiment! Dive into our publicly available code on GitHub, and start building your own DeepSeek-R1-powered agentic AI system on SageMaker. Unlock the next frontier of AI-driven automation—seamlessly scalable, intelligent, and production-ready.

Special thanks to Giuseppe Zappia, Poli Rao, and Siamak Nariman for their support with this blog post.
