STACKIT LLM: Using language models effectively


Language is the key to information, communication and innovation. At a time when artificial intelligence is increasingly shaping our everyday lives, large language models (LLMs) are rapidly gaining in importance. Whether for automated content creation, intelligent chatbots or the analysis of large volumes of information: companies of all sizes are using the capabilities of generative AI to improve processes and develop new applications.

With its AI Model Serving product, STACKIT offers a sovereign, secure and scalable platform for the use of LLMs – trained on billions of words, provided in data centers in Germany and Austria. This allows generative AI models to be operated in compliance with GDPR and used productively – for example to develop applications in different languages, to train your own models or to integrate them into existing services. Read on to find out what sets STACKIT’s platform for LLMs apart.


LLMs with STACKIT: Your advantages at a glance

Large language models bring many advantages – but also challenges. Companies need a platform that not only provides powerful models efficiently, but also securely and in compliance with the law.

This is exactly where STACKIT AI Model Serving comes in: It provides an environment in which LLMs can be used reliably and in compliance with GDPR – without compromising on performance or control.

Your benefits with STACKIT AI Model Serving:

STACKIT brings generative intelligence to your applications – controlled, secure and on European infrastructure.


LLMs in detail: What they do and how they work

Large Language Models are based on machine learning and process billions of words to recognize language patterns, meanings and relationships between terms. Models are trained on huge amounts of data – often from publicly accessible texts – and learn how human language works.

What is special about LLMs is that they do not look up information in the traditional sense, but predict statistically probable continuations of text sequences. This enables them to understand and create content and provide relevant answers – even in specialist areas such as law, IT or customer service.
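This next-token view can be made tangible with a few lines of code. The following sketch uses the small open gpt2 model via the Hugging Face transformers library – both purely illustrative assumptions, not part of STACKIT AI Model Serving – to show which continuations a model considers most probable:

# Minimal sketch: how a language model scores possible continuations.
# Assumes the Hugging Face "transformers" library and the small open "gpt2"
# model purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Austria is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]    # scores for the next token
probs = torch.softmax(logits, dim=-1)         # scores become probabilities

top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.2%}")  # most likely continuations

In production, exactly this mechanism runs at scale behind every generated sentence.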

This results in applications with high practical relevance – for example in German-speaking countries, where GDPR-compliant use is particularly important. GPT models, such as those used in ChatGPT, are well-known examples, with hundreds of billions of parameters modelling language.

The ability to generate text opens up numerous applications – from automated text creation and intelligent chatbots to the analysis of unstructured data.

STACKIT AI Model Serving makes this technology accessible to European companies – as a managed service with complete control over the model used, the training data and the content generated.

Tips for the successful use of LLMs with STACKIT

The productive use of LLMs requires more than just a powerful model. A well thought-out setup, clear usage guidelines and a secure technical environment are crucial. STACKIT AI Model Serving provides the right foundation for this – but there are also a few points to consider on the user side.

1. Choose your model deliberately: Not every model suits every use case. For simple chatbots, compact models with lower resource requirements are sufficient; for demanding tasks such as legal text analysis or technical documentation, larger models with strong competence across languages are a better fit. STACKIT supports various open source models and also lets you bring in your own variants.

2. Structure and test your prompts: The quality of the output depends heavily on the prompt. Use targeted, precise wording and test different variants to find what works best. "Few-shot learning", i.e. including a few worked examples in the prompt, can also significantly improve the quality of the results.
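A minimal few-shot prompt could look like the following sketch, which places two worked examples in front of the actual request; the chat-message format and the ticket categories are assumptions chosen for illustration:

# Sketch of a few-shot prompt: two labelled examples steer the model
# before the actual input is classified. The categories, texts and the
# chat-message format are illustrative assumptions.
few_shot_messages = [
    {"role": "system", "content": "Classify support tickets as 'billing', 'technical' or 'other'."},
    {"role": "user", "content": "My invoice for March is missing."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The API has returned a 500 error since yesterday."},
    {"role": "assistant", "content": "technical"},
    # The actual request to be classified comes last:
    {"role": "user", "content": "How do I reset my password?"},
]

Passed as the message list of a chat request, the two examples anchor the expected output format far more reliably than an instruction alone.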

3. Manage security and access: Use the available functions to restrict access. These include API tokens, role-based assignment of rights and integration into dedicated networks (VPCs). This ensures that only authorized applications and people can access your models.
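In practice, the API token typically travels as a bearer token in the request header. The following sketch assumes an OpenAI-compatible chat-completions endpoint; the URL, the model name and the STACKIT_API_TOKEN environment variable are placeholders, not confirmed product details:

# Sketch: calling a hosted model with an API token kept outside the code.
# Endpoint URL, model name and the environment variable are placeholders.
import os
import requests

API_URL = "https://<your-model-serving-endpoint>/v1/chat/completions"  # placeholder
headers = {"Authorization": f"Bearer {os.environ['STACKIT_API_TOKEN']}"}  # token from the environment

payload = {
    "model": "<model-name>",  # placeholder
    "messages": [{"role": "user", "content": "Summarize the GDPR in one sentence."}],
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Keeping the token in an environment variable or a secrets store – never in the source code – is the client-side counterpart to the platform's access controls.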

4. Plan and scale resources: Planning ahead is important, especially for large models and higher query volumes. STACKIT allows you to provide inference resources as required – with automatic scaling as the workload increases. The pay-per-use model enables transparent billing without minimum runtimes.
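On the client side, short load peaks or rate limits during scale-up can additionally be absorbed with a simple retry pattern. The helper below is a generic sketch, not a STACKIT-specific feature:

# Sketch: retry with exponential backoff for transient errors
# (e.g. HTTP 429 or 503 while capacity is being scaled up).
import time

def call_with_backoff(request_fn, max_attempts=5, base_delay=1.0):
    """Retry a callable that raises on transient errors."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise                              # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # wait 1 s, 2 s, 4 s, ...

Combined with the request from the previous example, a call then becomes call_with_backoff(lambda: requests.post(API_URL, headers=headers, json=payload, timeout=30)).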

5. Handle data protection and training responsibly: When training your own models, the information used must be carefully selected. Pay attention to its origin, structure and legal basis – especially when content from external sources flows into the training.
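A simple pre-processing step can already remove obvious personal data from training texts before they are used. The sketch below redacts e-mail addresses and phone-like numbers with regular expressions – a minimal illustration that does not replace a proper data-protection review:

# Sketch: redacting obvious personal data from training texts.
# The patterns are deliberately simple and only illustrative.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /()-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact Ms. Example at anna@example.com or +49 711 1234567."))
# -> Contact Ms. Example at [EMAIL] or [PHONE].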

6. Use monitoring: STACKIT provides comprehensive functions for monitoring usage, performance and system utilization. This allows you to identify bottlenecks or unusual activity at an early stage and react appropriately.
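The platform-side metrics can be complemented by lightweight logging in your own application. The sketch below measures latency per request and reads the token counts from the response, assuming an OpenAI-compatible response body with a "usage" section:

# Sketch: client-side logging of latency and token usage per request.
# Assumes a response in OpenAI-compatible format with a "usage" section.
import logging
import time

logging.basicConfig(level=logging.INFO)

def logged_call(request_fn):
    start = time.perf_counter()
    response = request_fn()                      # e.g. the requests.post call from above
    elapsed = time.perf_counter() - start
    usage = response.json().get("usage", {})
    logging.info("latency=%.2fs prompt_tokens=%s completion_tokens=%s",
                 elapsed, usage.get("prompt_tokens"), usage.get("completion_tokens"))
    return response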

Paying attention to these points lays the foundation for the successful and secure use of LLMs – and allows you to exploit the full potential of generative AI.

STACKIT – the right platform for LLMs

Large language models are changing the way companies process information, understand language and generate content. Whether automated text creation, intelligent chatbots or the analysis of unstructured data: LLMs offer a wide range of possible applications – in different areas, in different languages, for different tasks.

STACKIT AI Model Serving provides you with the right platform for this: GDPR-compliant, flexibly scalable and fully under European control. You benefit from a modern infrastructure that combines security, availability and control – and can use powerful generative models such as GPT-based systems productively at the same time.

Deployment is simple, efficient and easy to integrate – via REST API and with full control over the parameters used and the learning processes. This allows you to use generative language models in a targeted way, explore new possibilities and build productive systems in a short time. Whether a standard model or your own development: STACKIT provides the framework for successfully establishing artificial intelligence for language and text in your company.
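The following sketch shows what such an integration could look like with the widely used OpenAI Python SDK, assuming the endpoint exposes an OpenAI-compatible interface; base URL, model name and the environment variable are placeholders, and temperature and max_tokens stand in for the parameter control mentioned above:

# Sketch: integration via the OpenAI-compatible Python SDK.
# Base URL, model name and the token variable are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-model-serving-endpoint>/v1",  # placeholder
    api_key=os.environ["STACKIT_API_TOKEN"],              # placeholder token variable
)

completion = client.chat.completions.create(
    model="<model-name>",  # placeholder
    messages=[{"role": "user", "content": "Write a short product description for a coffee grinder."}],
    temperature=0.3,       # lower values make the output more deterministic
    max_tokens=200,        # upper limit for generated tokens
)

print(completion.choices[0].message.content)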


FAQ: LLMs with STACKIT

What is a Large Language Model (LLM)?

An LLM is an AI system that is based on billions of parameters and is trained using machine learning. It can process information, create content and capture fine linguistic distinctions – in different languages, including German, and in specialist areas.

How does STACKIT AI Model Serving work?

STACKIT provides you with a managed service that allows you to operate your own or pre-trained LLMs in a secure and scalable manner. You receive access via an API and can integrate the models into your applications – fully GDPR-compliant and operated in European data centers.

Which models can I use?

You can use open source models (for example models from the GPT family or from the Hugging Face hub) or upload your own models. STACKIT offers a flexible environment for different model types designed for text generation, language understanding or classification.

Is the use of LLMs on STACKIT secure?

Yes, STACKIT operates all services in ISO/IEC 27001-certified data centers in Germany and Austria. The highest security standards apply, including network isolation, encryption and role-based access control. Your data and models remain completely under your control.

What does it cost to operate an LLM with STACKIT?

Billing is based on the pay-per-use principle: you only pay for the resources you actually use – with no minimum term or upfront payment. The prices of AI Model Serving are based on the number of tokens processed (input and output) and the hours of use. This gives you a flexible and cost-transparent entry into the world of generative language models.
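As a rough sketch, a monthly estimate can be derived from the expected token volume and hours of use. All prices in the example below are made-up placeholders; only the current STACKIT price list is authoritative:

# Sketch: estimating monthly cost from token volume and hours of use.
# All unit prices are made-up placeholders, not actual STACKIT prices.
def estimate_cost(input_tokens, output_tokens, hours,
                  price_per_1k_in, price_per_1k_out, price_per_hour):
    token_cost = (input_tokens / 1000) * price_per_1k_in \
               + (output_tokens / 1000) * price_per_1k_out
    return token_cost + hours * price_per_hour

# Example with placeholder prices only:
print(estimate_cost(2_000_000, 500_000, 100,
                    price_per_1k_in=0.0005, price_per_1k_out=0.0015,
                    price_per_hour=0.05))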



Please contact us for your individual consulting via the contact form.