These FAQs address common questions about the AI assistant and other AI features in the Axios HQ platform.
Axios HQ is committed to responsible AI deployment, recognizing its potential risks and challenges. That’s why:
- Customer data is not used to train any AI models
- Our AI features don’t involve profiling or high-risk automated decisions
- We offer the ability to opt out of AI features entirely, if desired
AI Assistant FAQ
General Information
What happened to Smart Brevify, Tone Variation, Translate, and Brainstorm?
- We combined these features into the assistant. You can simply ask the assistant to change the tone of your text or translate it.
How can I access the “Getting Started Guide” and “Best Practices” documents?
- The “Getting Started Guide” and “Best Practices” documents are only available to customers. Click this link and use the email address associated with your HQ account to view them.
What’s the difference between creating a Smart Brevity card and creating a Smart Brevity card template?
- The purpose of creating a Smart Brevity card is to transform your raw text or topic description into a fully written, concise, and impactful card. This is most helpful when you need a ready-to-use card for your edition.
- The purpose of creating a Smart Brevity card template is to create a structured outline with instructions on how to write a card in the Smart Brevity style. This is most helpful when you want guidance on how to structure and write the content yourself.
- The assistant can create and handle up to 5 cards at once. For more than 5, please split your requests.
Assistant Capabilities
What are the assistant’s primary use cases and capabilities?
- The assistant has been developed “by Communication experts for Communication experts” to streamline the internal communications workflow, from generative AI writing assistance to communication strategy, insights, and recommendations.
- You can talk to the assistant as if it were your personal communication assistant: instruct it directly to perform any writing-assistance task you need, or simply ask it for help.
- The current core capabilities include:
- Writing a card in the Smart Brevity format, starting with long draft text, disordered notes or just an idea.
- Analyzing long text and extracting key highlights to help you break it out into an outline for one or multiple cards.
- Generating a helpful card template so you or your collaborators can fill in the content with your/their own writing.
- Exploring your archive and finding key follow-up topics/cards for your upcoming edition.
- Supporting custom writing-assistance requests.
- and more…
How is the assistant different from ChatGPT?
- The assistant uses the same “foundational” model as ChatGPT.
- You can interact with the assistant in the same conversational way as with ChatGPT.
- You can do anything with the assistant that you would do with ChatGPT, but with better results for internal communications.
- The assistant is built on ChatGPT’s LLM, fine-tuned specifically for use by Communications experts.
- It provides HQ Smart Brevity writing expertise and guidance: The Assistant has been trained by HQ’s Communication experts to write like an HQ Communication expert, following the Smart Brevity editorial guidance and format.
- It provides HQ Comms strategy expertise and guidance: The Assistant is connected to HQ domain knowledge data that include the complete Smart Brevity editorial guidance and a complete library of card templates. This allows the Assistant to provide you with editorial guidance and card template recommendations.
- It provides personalized recommendations based on your data: The Assistant is connected to your communication archive. It can assist you in searching and answering questions about cards/topics you sent in the past and suggest personalized follow-up recommendations to improve your communication.
How is the assistant similar to ChatGPT?
- The assistant uses the same foundational model as ChatGPT, allowing conversational interaction and performing similar tasks, but with better results for internal communications.
Document upload
What types of files are supported for document upload?
- We support .pdf and .docx files at this time.
Are there limitations for documents with images?
- Documents with images can be uploaded; however, the assistant will not be able to read content embedded in an image, such as a chart or graph.
Is there a size limitation for documents that can be uploaded?
- Yes, you can upload documents less than 10 MB. Up to 3 documents can be uploaded at one time.
Is there a word limit?
- Yes, there is a 30,000-word limit across all files. If you upload 3 files, the assistant will only take the first 10,000 words from each file. If you upload one file, it will take the first 30,000 words. Note: 10,000 words is roughly 20 pages of single-spaced text.
What if the document has more than the word limit?
- The upload will not fail; the document will be accepted. However, the content will be truncated, and users will see a message that the content was too long and the assistant is only looking at some of it.
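To make that truncation behavior concrete, here is a minimal sketch, assuming the 30,000-word budget is split evenly across uploaded files. The function, variable names, and message text are hypothetical, not Axios HQ’s actual implementation.

```python
# Hypothetical sketch: split a shared 30,000-word budget evenly across uploads
# and keep only the first N words of each file. Names and messages are illustrative.
TOTAL_WORD_LIMIT = 30_000

def truncate_uploads(files: dict[str, str]) -> dict[str, str]:
    """Return each file's text cut to its share of the total word budget."""
    per_file_limit = TOTAL_WORD_LIMIT // max(len(files), 1)
    truncated = {}
    for name, text in files.items():
        words = text.split()
        if len(words) > per_file_limit:
            print(f"{name}: content was too long; the assistant will only look "
                  f"at the first {per_file_limit:,} words.")
        truncated[name] = " ".join(words[:per_file_limit])
    return truncated

# Three files -> roughly 10,000 words kept per file; one file -> up to 30,000 words.
```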
What actions or prompts are available when you upload a document?
- Write a card, create an outline, and create a template
- Analyze and answer questions about uploaded documents
Note: If a second document is uploaded in the same chat, the assistant will apply the same rule to it as it did to the first document when deciding what action to take with the content.
AI Functionality and Data Security
How do Axios HQ’s AI tools and features work?
- Our AI features extract data from your query/request and your content (input), then process it through HQ’s proprietary prompts to suggest text copy matching your request (output).
- We do this by leveraging third-party large language models (LLMs) supplied by our vendors, like OpenAI and Anthropic, and our own proprietary models trained on Axios Media’s Smart Brevity® news content.
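As a rough illustration of that input → proprietary prompt → LLM → output flow, here is a conceptual sketch using a generic OpenAI chat call. The prompt text and model name are placeholders, not HQ’s actual prompts or pipeline.

```python
# Conceptual sketch only: wrap the user's input in a (placeholder) system prompt
# and send it to a third-party LLM, returning the suggested copy as output.
from openai import OpenAI

client = OpenAI()

def rewrite_request(user_text: str) -> str:
    system_prompt = (
        "You are an internal-communications editor. Rewrite the user's text "
        "concisely, with a clear headline and scannable bullets."
    )  # stand-in for HQ's proprietary prompts
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for whichever vendor model is used
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},  # the "input"
        ],
    )
    return response.choices[0].message.content  # the "output"
```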
Does HQ train its models on my data?
- No, we don’t train any LLMs or other machine learning models on your data. All of your content, including inputs and outputs, is logically segregated from other accounts (a security standard known as “Tenant Isolation”). As a result, it cannot be returned as output to another company’s query or request to the AI assistant.
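For illustration, the “Tenant Isolation” standard described above amounts to scoping every data access to the requesting account. Here is a minimal sketch with a hypothetical table and schema; it is not Axios HQ’s actual data layer.

```python
# Hypothetical sketch of tenant isolation at the data-access layer: every read
# is filtered by the requesting account's ID, so one tenant's content can never
# be returned for another tenant's request. Table/column names are illustrative.
import sqlite3

def fetch_cards(conn: sqlite3.Connection, tenant_id: str, search_term: str):
    """Return only cards that belong to the requesting tenant."""
    return conn.execute(
        "SELECT id, body FROM cards WHERE tenant_id = ? AND body LIKE ?",
        (tenant_id, f"%{search_term}%"),
    ).fetchall()
```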
How does Axios HQ protect my data?
- Axios HQ protects the security, confidentiality and integrity of all customer data using industry-standard technical and organizational security measures.
- In addition:
- Your content is confidential, including the inputs and outputs from our AI features.
- Your content isn’t used to train any AI models.
- Inputs and outputs are deleted by our LLM vendors at regular intervals, typically within 30 days (unless otherwise required by law).
- HQ securely destroys all of your data after your subscription ends (excluding de-identified metadata and telemetry data) in accordance with your contract.
- We comply with all laws applicable to Axios HQ’s processing of your data (including data privacy and protection laws), and we require our vendors to do the same.
Does the assistant learn over time about my communication style?
- Not yet, but it is something we are exploring.
Is the assistant connected to the internet?
- No.
- The assistant is NOT connected to the internet. If you ask the assistant general knowledge questions, it may refuse to answer or give you a response based on its training data. In some cases it may hallucinate, so the assistant should not be used for general fact-checking purposes. For more on hallucination, see the section “How is hallucination addressed?”
What safeguards prevent the AI from visiting malicious sites?
- The assistant is NOT connected to the internet.
Who owns the output from Axios HQ’s AI tools and features?
- You own the output created with Axios HQ’s AI tools, just like you own any other content you upload into or create while using the platform. Be aware, however, that AI output is probably not eligible for copyright protection.
What kind of AI features are incorporated into Axios HQ?
- Axios HQ uses generative AI features to assist users at every stage of the communications process. Whether you need help brainstorming, writing or editing, Axios HQ’s AI assistant has you covered.
- We don’t use “high-risk” AI systems (as defined under applicable law) within the HQ platform. Further, none of the AI incorporated into the HQ platform is used for profiling that analyzes or predicts individual behavior, and it does not make automated decisions that would impact the opportunities, rights or freedoms of any individual.
How do I know if the output is accurate?
- We rigorously test our prompts to ensure the quality of the outputs returned. However, any AI tool can sometimes produce inaccurate output, and the assistant is not a “fact-checking” tool. Keeping a “human in the loop” is critical to using Axios HQ’s AI features effectively. Ultimately, it is your responsibility to QC the output provided by our AI features before you use it.
How does the assistant handle context and maintain continuity over the course of a conversation? Does the assistant’s chat history save from one edition to another?
- The assistant is always aware of which edition you are working on and the metadata associated with that edition, such as which series it belongs to and who the writer is (you).
- The assistant takes into account all the data present in the conversation history. This includes all the conversation exchanges, any text pasted or uploaded into the conversation, generated outputs, and queried archive data.
- The conversation is user specific and edition specific. In other words each user has a private conversation thread with the assistant within each specific edition.
- It is recommended to clear the conversation thread after completing one or more tasks that require the same context, and before switching to other, unrelated tasks. This helps avoid potential ambiguity and saves tokens.
Who can I talk to if I have more questions?
- Your Axios HQ rep can help you with any additional questions you may have, including by setting up calls with our technology, security and compliance teams.
AI General FAQ
Which HQ feature uses “in house” vs “third party” models?
- Axios HQ uses “in house” models for all Smart Brevity Guidance features (the right-side panel of the compose experience). This includes a combination of “quantitative” AI models (formatting recommendations, bulleting, bolding, card scoring, subject line scoring, axiom recommendations) and “generative” AI models (filler language, clarity recommendations, long sentence replacement, block breakup).
- Axios HQ uses “third party” foundational pre-trained models, in combination with HQ proprietary prompt engineering and HQ domain knowledge data, for generative AI assistance features such as: the assistant, tone variation, advanced rephrasing (shorten, professionalize, activate), autocomplete and thought starters, translation, card headline generation, subject line generation, and image generation.
How do you ensure no training data violates another party’s IP/copyright?
- Axios In-House Models:
- When training or fine-tuning our in-house AI, we only use Axios HQ’s content.
- We do not use our users’ data to fine tune our in-house models.
- Third-Party Models:
- While we can’t control the foundational model training data used by our third-party AI vendors (OpenAI and Anthropic), we receive indemnification from them for copyright-infringing outputs, and we flow down this protection to the customer in our contracts.
- We don’t allow our third-party AI vendors to train their models on any Axios HQ customer data.
What content moderation and filtering mechanisms are applied? What controls are in place for offensive/harmful language?
- We use moderation checks provided by third party LLMs.
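For illustration, a moderation check of this kind can be as simple as passing text through a vendor-provided moderation endpoint before it is used. This sketch uses OpenAI’s moderation API as an example; it is not Axios HQ’s exact implementation.

```python
# Illustrative only: an example of a third-party moderation check that flags
# potentially offensive or harmful text before it is used further.
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Return True if the moderation model flags the text."""
    result = client.moderations.create(input=text).results[0]
    return result.flagged
```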
How is prompt drift addressed?
- We evaluate performance against a static reference set and benchmark the model’s performance to identify drift. We also proactively intervene by monitoring prompt effectiveness for our users.
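Here is a minimal sketch of drift detection against a static reference set, assuming a hypothetical scoring function for model outputs; it illustrates the idea only, not our actual tooling.

```python
# Hypothetical sketch: re-score a static reference set with the current
# model/prompt and flag drift if average quality falls below the stored baseline.
from statistics import mean

def has_drifted(reference_cases: list[dict], run_and_score, threshold: float = 0.05) -> bool:
    """`run_and_score(case)` is a placeholder that runs the current prompt/model
    on a reference case and returns a quality score between 0 and 1."""
    current = mean(run_and_score(case) for case in reference_cases)
    baseline = mean(case["baseline_score"] for case in reference_cases)
    return (baseline - current) > threshold
```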
How is the data being secured in transit and at rest?
- We encrypt all data in transit using Transport Layer Security (TLS) version 1.3 and at rest using 256 bit Advanced Encryption Standard (AES). We never export your data from the production environment, and only those with a business need have access. All access is logged and monitored.
For more details about our security protocols, visit our Trust Center.
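For readers who want a concrete picture of 256-bit AES at rest, here is a generic sketch using the Python cryptography library. It illustrates the algorithm only and says nothing about Axios HQ’s actual key management or storage architecture.

```python
# Illustrative only: AES-256-GCM encryption and decryption of a blob at rest.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique nonce per encryption

ciphertext = aesgcm.encrypt(nonce, b"customer content", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"customer content"
```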
What are the details of model monitoring?
- We evaluate performance against a reference set and benchmark the model’s performance to identify drift. Monitoring includes user engagement for prompt effectiveness and back-end data for debugging. We are partnered with ArthurAI as a model monitoring and observability vendor.
How is bias addressed?
- Our teams place an emphasis on creating responsible machine learning products. This includes careful problem design, employing technical methods to mitigate bias during development (such as model grounding), continuous monitoring, and following practices recommended by the research community. When fine tuning models, we curate Smart Brevity™ content, which grounds the model output and makes it less likely to produce biased content. This is an active area of research that we follow closely. More details on our Smart Brevity™ guidance can be viewed in our Help Center.
How is hallucination addressed?
- Our approach to AI is designed around human-in-the-loop interactions. The generated content serves as an aid to help the user become an editor, instead of a writer.
- HQ AI is not used for fact-checking purposes, where hallucinations would be most problematic. Because the facts and context are mainly provided by the user and taken into account by the generative models, occurrences of hallucination are greatly reduced, and the hallucination rate is typically not a factor in the product experience.
- We have developed guardrails that check for hallucination in model inputs and outputs, as well as evals to check LLM output quality. If the checks fail, fallback behavior addresses the detected hallucination, and the output is never served to the user as-is (a sketch of this pattern appears after this list).
- As users continue to use HQ, our checks on model hallucination will continue to improve.
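Here is a minimal sketch of the guardrail-plus-fallback pattern described above, with hypothetical check and fallback helpers; it is not Axios HQ’s actual guardrail code.

```python
# Hypothetical sketch: validate the model's input and output, and never serve a
# response as-is when a check fails. The helper functions are placeholders.
def generate_with_guardrails(prompt: str, generate, passes_checks, fallback) -> str:
    if not passes_checks(prompt):           # guardrail on the input
        return fallback(prompt, reason="input failed checks")
    output = generate(prompt)
    if not passes_checks(output):           # guardrail + evals on the output
        return fallback(prompt, reason="output failed quality/hallucination checks")
    return output
```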
How long do you retain inactive models?
- Our models are not client-specific. They are trained on an HQ-curated dataset (not user data), so there is no client data exposure or retention risk in HQ model management. Our models are trained to address specific Smart Brevity™ tasks. Once a model is replaced with a new version, we retain the inactive model only while transitioning users to the new one.
What countermeasures do you have in place to prevent poisoning attacks on models?
Poisoning attacks are performed at training time by injecting content crafted to undermine model performance. Any risk of this happening has been mitigated by:
- Limiting training data to Axios HQ’s curated data (not user data)
- Monitoring model performance over time to detect drift or other unexpected changes in performance
Do you have real-time model monitoring and model drift detection processes or mechanisms?
- We evaluate performance against a reference set and benchmark the model’s performance to identify drift. Monitoring includes user engagement for prompt effectiveness and back-end data for debugging. We are partnered with ArthurAI as a model monitoring and observability vendor.
Can Axios HQ use my data in other ways?
- We only use your data to deliver and improve the services for you. When your subscription ends, all of your content and any data that is identifiable to your company or any of your users is securely destroyed.
- We retain and use deidentified telemetry data to ensure the Axios HQ Platform (including the AI Assistant and other AI features) functions correctly for all users and meets user expectations.
- We also retain and use deidentified metadata created from aggregated customer content to enhance our AI features and improve the HQ platform. This metadata includes data about trends within the aggregated content, but does not include the customer content itself.
- For example: If the AI Assistant sees you are writing an open enrollment newsletter, it may offer suggestions for what to include based on topics commonly covered in open enrollment newsletters. It might also suggest the best timing or cadence for sending to ensure optimal open rates. It WILL NOT, however, offer suggestions that include any content from another company’s newsletters. That data is logically segregated and not used for model training.
What is AIDD?
- “AIDD” means AI-Derived-Data. It is a term unique to Axios HQ.
- AIDD retrieves and analyzes a user's previous sends, so the generated content is more aligned with the topics and intent from that user's previous cards, editions, and series. We do not mix a user’s data with another user's data. We do not use the data from one user to train a model used by other users.
- Other products in the industry may refer to this as “RAG” (Retrieval-Augmented Generation), which is the industry term for this approach. A minimal sketch of the idea follows below.
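The sketch below shows the retrieval-augmented idea behind AIDD, with hypothetical search and generation helpers; note that retrieval is scoped to a single user’s own archive. It is an illustration, not Axios HQ’s actual implementation.

```python
# Hypothetical sketch of AIDD/RAG: retrieve the user's own previous sends that
# are relevant to the request and pass them to the model as grounding context.
def answer_with_aidd(user_id: str, request: str, search_archive, generate) -> str:
    previous_sends = search_archive(user_id=user_id, query=request, limit=5)
    context = "\n\n".join(card["text"] for card in previous_sends)
    prompt = (
        "Use these past cards from this user's own archive as context:\n"
        f"{context}\n\n"
        f"Request: {request}"
    )
    return generate(prompt)
```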
Glossary
This is how Axios HQ uses these terms, in this evolving field.
Generative AI: This describes algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. Generative AI is a form of Machine Learning.
- In Axios HQ, these are product features that produce text or images (Autocomplete, Image generation)
Machine learning model: Our generic term for a statistical model or deep neural network. These models are tuned to perform well on a specific task (like classification), rather than being good at a range of tasks like generative AI. These models learn how to perform the task from training/fine-tuning data, which we limit to content produced by Axios HQ and Axios Media.
- In Axios HQ, you can find these models behind Smart Brevity Guidance in the editor (recommendations to add axioms or bolding, for example)
Artificial intelligence (AI): An umbrella term for generative AI, machine learning and deep learning.
❗Contact your account team or help@axioshq.com to opt out of AI features.