STACKIT LLM: Using language models effectively

Language is the key to information, communication and innovation. At a time when artificial intelligence increasingly shapes everyday life, large language models (LLMs) are rapidly gaining importance. Whether for automated content creation, intelligent chatbots or the analysis of large volumes of information, companies of all sizes are using the capabilities of generative AI to improve processes and develop new applications.
With its AI Model Serving product, STACKIT offers a sovereign, secure and scalable platform for using LLMs – models trained on billions of words, hosted in data centers in Germany and Austria. This allows generative AI models to be operated in compliance with the GDPR and used productively – for example to develop applications in different languages, to train your own models or to integrate them into existing services. Read on to find out what sets STACKIT’s platform for LLMs apart.
Glossary: Important terms relating to STACKIT LLMs
- LLM (Large Language Model): A language model based on billions of parameters and trained by machine learning. It can understand and generate natural language and apply it in different contexts.
- Generative AI: An area of artificial intelligence in which content such as text, images or code is generated automatically – based on previously trained models.
- Model: Refers to a trained AI system that processes input and delivers results or predictions. LLMs are models for language, especially for generating texts.
- Training: The process by which an AI model learns patterns, structures and relationships from a large amount of text data. This develops and improves its text processing capabilities.
- Inference: The application of a trained model to new input – for example to answer questions, complete texts or generate content.
- Prompting: The method by which a model is controlled by specific input (“prompts”). Example: “Create a summary of this text.”
- ChatGPT: A well-known AI chatbot from OpenAI that can conduct human-like dialogs. It is based on GPT (“Generative Pre-trained Transformer”) language models.
- STACKIT AI Model Serving: A managed service from STACKIT that enables companies to use their own or pre-trained AI models (e.g. LLMs) productively – with a focus on security, control and European sovereignty.
- Token: A basic unit of language processing – typically a word or word fragment – that the model reads and generates. Depending on the model, billions of tokens may be processed during training.
- Inference endpoint: An API interface via which an AI model can be called productively – e.g. for integration into chatbots or other applications.
- Parameter: Refers to the internally learned weightings of an LLM. The more parameters a model contains, the more finely it can recognize and distinguish linguistic structures.
- Google: A global technology company that has developed its own language models such as PaLM. These are used, for example, for the automated answering of questions or semantic searches.
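The “token” entry above can be illustrated with a minimal sketch. Note that this toy tokenizer is purely illustrative: real LLMs use subword tokenizers (e.g. byte-pair encoding), so a single word may map to several tokens.

```python
# Naive tokenization sketch (illustration only). Real LLMs use
# subword tokenizers such as byte-pair encoding, so a word like
# "Sprachmodell" may be split into several tokens rather than one.

def naive_tokenize(text: str) -> list[str]:
    """Split text into whitespace-separated tokens (toy scheme)."""
    return text.split()

prompt = "Create a summary of this text."
tokens = naive_tokenize(prompt)
print(tokens)
print(len(tokens))  # each word counts as one token in this toy scheme
```

Token counts matter in practice because both context windows and pay-per-use billing are measured in tokens, not characters or words.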
LLMs with STACKIT: Your advantages at a glance
Large language models bring many advantages – but also challenges. Companies need a platform that not only provides powerful models efficiently, but also securely and in compliance with the law.
This is exactly where STACKIT AI Model Serving comes in: It provides an environment in which LLMs can be used reliably and in compliance with GDPR – without compromising on performance or control.
Your benefits with STACKIT AI Model Serving:
- Data sovereignty: all models are operated in data centers in Germany and Austria. Data never leaves the European area. This protects sensitive content and fulfills legal requirements.
- Security: The infrastructure is ISO/IEC 27001-certified. Network isolation, encryption and role-based access controls ensure comprehensive protection.
- Scalability: Whether test model or productive operation with high request volumes – STACKIT allows flexible provision and use of generative models, tailored to your requirements.
- Flexibility: Use pre-trained open-source models or integrate your own. The REST API connection enables fast, straightforward integration into existing processes.
- Control and transparency: You decide which models are used, how often they may be called up and to what extent resources are provided.
STACKIT brings generative intelligence to your applications – controlled, secure and on European infrastructure.
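The REST API integration mentioned above can be sketched as follows. The URL, model name and header layout here are assumptions for illustration – check the STACKIT AI Model Serving documentation for the actual API shape; many serving platforms expose an OpenAI-compatible chat endpoint, and that convention is assumed here.

```python
import json

# Hypothetical endpoint and credentials for illustration only.
API_URL = "https://api.example.stackit.cloud/v1/chat/completions"
API_TOKEN = "your-api-token"  # issued per project; keep it secret


def build_chat_request(model: str, user_message: str) -> tuple[dict, dict]:
    """Return (headers, payload) for an OpenAI-style chat completion call."""
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 256,
    }
    return headers, payload


headers, payload = build_chat_request("my-llm", "Summarize the GDPR in one sentence.")
print(json.dumps(payload, indent=2))

# To actually send the request (requires the `requests` package):
#   import requests
#   response = requests.post(API_URL, headers=headers, json=payload, timeout=30)
#   print(response.json()["choices"][0]["message"]["content"])
```

Keeping the API token in an environment variable or secret store, rather than in code, fits the role-based access controls described above.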
LLMs in detail: What they do and how they work
Large Language Models are based on machine learning and process billions of words to recognize language patterns, meanings and relationships between terms. Models are trained on huge amounts of data – often from publicly accessible texts – and learn how human language works.
What is special about LLMs is that they do not look up stored facts in the traditional sense; instead, they model statistical probabilities over token sequences. This enables them to understand and generate content and provide relevant answers – even in specialized domains such as law, IT or customer service.
This results in applications with high practical relevance – for example in German-speaking countries, where GDPR-compliant use is particularly important. GPT models, such as those behind ChatGPT, are well-known examples, with hundreds of billions of parameters that model language.
Numerous applications are created with the ability to generate text:
- Automated chatbots that answer customer queries in different languages
- Text creation for marketing or technical documentation
- Semantic search in large databases
- Translation and localization, adapted to industry-specific requirements
- Intelligent assistance systems for developers, authorities or customer service
STACKIT AI Model Serving makes this technology accessible to European companies – as a managed service with complete control over the model used, the training data and the content generated.
Tips for the successful use of LLMs with STACKIT
The productive use of LLMs requires more than just a powerful model. A well-thought-out setup, clear usage rules and a secure technical environment are crucial. STACKIT AI Model Serving provides the right framework for this – but there are also a few points to consider on the user side.
1. Choose your model deliberately: Not every model suits every use case. For simple chatbots, compact models with lower resource requirements are sufficient. For demanding tasks such as legal text analysis or technical documentation, larger models with strong linguistic competence in different languages are a better fit. STACKIT supports various open-source models and lets you import your own variants.
2. Structure and test prompts: Output quality depends heavily on the prompt. Use targeted, precise wording and test several variants to find the best result. “Few-shot learning”, i.e. including a few worked examples in the prompt, can also significantly improve result quality.
3. Regulate security and access: Use the available functions to restrict access, including API tokens, role-based rights assignment and integration into dedicated networks (VPCs). This ensures that only authorized applications and people can reach your models.
4. Plan and scale resources: Planning ahead is important, especially for large models and higher query volumes. STACKIT allows you to provide inference resources as required – with automatic scaling as the workload increases. The pay-per-use model enables transparent billing without minimum runtimes.
5. Design data protection and training responsibly: When training your own models, select the data carefully. Pay attention to its origin, structure and legal framework – especially when content from external sources is processed during training.
6. Use monitoring: STACKIT provides comprehensive monitoring functions to monitor usage, performance and system utilization. This allows you to identify bottlenecks or unusual activities at an early stage and take appropriate action.
Paying attention to these points lays the foundation for the successful and secure use of LLMs – and allows you to exploit the full potential of generative AI.
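Tip 2 above mentions few-shot learning. A minimal sketch of building such a prompt, assuming a simple sentiment-classification task (the example texts and labels are made up for illustration):

```python
# Few-shot prompting sketch: a handful of worked examples steer the
# model toward the desired output format before the real query.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate an instruction, labeled examples and the new query."""
    parts = ["Classify the sentiment of each review as positive or negative.\n"]
    for review, label in examples:
        parts.append(f"Review: {review}\nSentiment: {label}\n")
    parts.append(f"Review: {query}\nSentiment:")
    return "\n".join(parts)


examples = [
    ("The support team solved my issue within minutes.", "positive"),
    ("The app crashes every time I open it.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
print(prompt)
```

Ending the prompt with an open "Sentiment:" label nudges the model to complete it with just the classification, which keeps outputs easy to parse.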
STACKIT – the right platform for LLMs
Large language models are changing how companies process information, understand language and generate content. Whether automated text creation, intelligent chatbots or the analysis of unstructured data: LLMs offer a wide range of applications – across domains, languages and tasks.
STACKIT AI Model Serving provides you with the right platform for this: GDPR-compliant, flexibly scalable and fully under European control. You benefit from a modern infrastructure that combines security, availability and control – and can use powerful generative models such as GPT-based systems productively at the same time.
Deployment is simple, efficient and easy to integrate – with a REST API and full control over the parameters and learning processes used. This lets you apply generative language models in a targeted way, explore new possibilities and build productive systems quickly. Whether a standard model or your own development: STACKIT provides the framework for successfully establishing language and text AI in your company.
FAQ: LLMs with STACKIT
What is a Large Language Model (LLM)?
An LLM is an AI system based on billions of parameters and trained via machine learning. It can process information, generate content and handle fine linguistic distinctions – in specific domains and languages, including German.
How does STACKIT AI Model Serving work?
STACKIT provides you with a managed service that allows you to operate your own or pre-trained LLMs in a secure and scalable manner. You receive access via an API and can integrate the models into your applications – fully GDPR-compliant and operated in European data centers.
Which models can I use?
You can use open-source models – for example, openly available models from the Hugging Face ecosystem – or upload your own. STACKIT offers a flexible environment for different model types designed for text generation, language understanding or classification.
Is the use of LLMs on STACKIT secure?
Yes, STACKIT operates all services in ISO/IEC 27001-certified data centers in Germany and Austria. The highest security standards apply, including network isolation, encryption and role-based access control. Your data and models remain completely under your control.
What does it cost to operate an LLM with STACKIT?
Billing follows the pay-per-use principle: you only pay for the resources you actually use, with no minimum term or upfront payment. AI Model Serving prices are based on the number of tokens processed (input and output) and on hourly usage. This gives you a flexible, cost-transparent entry into the world of generative language models.
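Token-based pricing makes cost estimates straightforward to sketch. The per-token prices below are made-up placeholders, not STACKIT's actual rates – consult the current AI Model Serving price list for real figures.

```python
# Rough pay-per-use cost estimate. Prices are hypothetical placeholders
# chosen for illustration; real rates come from the provider's price list.

PRICE_PER_1K_INPUT_TOKENS = 0.0005   # EUR, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # EUR, hypothetical


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request in EUR from its token counts."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)


# Example: 1 million requests per month, each with roughly
# 500 input tokens and 200 output tokens.
monthly = 1_000_000 * estimate_cost(500, 200)
print(f"Estimated monthly token cost: {monthly:.2f} EUR")
```

Because output tokens are often priced higher than input tokens, capping response length (e.g. via a `max_tokens` limit) is a simple lever for controlling costs.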
