🚀 Kamka IT | Open-Source AI & Backend Engineering

Empowering the open-source community with robust ML pipelines, fine-tuned models, and agentic workflows.

Website Hugging Face Contact


🌍 About Us

Based in Tunisia, Kamka IT is a specialized consulting firm operating at the intersection of Advanced Backend Engineering and Artificial Intelligence. We build scalable, self-hosted architectures and intelligent agentic systems.

Beyond our enterprise consulting, we are deeply committed to the open-source ethos. Our Hugging Face organization is dedicated to sharing our internal research, fine-tuned models, and end-to-end pipelines with the global AI community.

🎯 Our Open-Source Mission

At Kamka IT, we believe that the future of AI lies in transparency, accessibility, and collaboration. Our open-source objectives on Hugging Face are:

  1. Developing Specialized Models: Releasing state-of-the-art weights fine-tuned for niche domains (such as Bioinformatics and Software Engineering).
  2. Open Pipelines: Sharing robust, reproducible training and inference pipelines to help developers integrate AI into their own self-hosted infrastructure.
  3. Advancing Agentic Workflows: Contributing models and datasets optimized for agentic frameworks like LangGraph and LiteLLM.
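As a small illustration of the reproducibility goal in point 2, here is a minimal, hypothetical sketch (the class, field names, and dataset name are ours for illustration, not part of any released Kamka IT pipeline) of a run configuration that serializes deterministically and hashes to a stable run ID, so a training run can be identified and replayed exactly:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class RunConfig:
    """Hypothetical training-run configuration for a reproducible pipeline."""
    model_id: str
    dataset: str
    seed: int
    learning_rate: float
    epochs: int

    def fingerprint(self) -> str:
        """Deterministic hash of the config, usable as a run identifier."""
        # sort_keys=True guarantees the same JSON bytes for the same config,
        # so the hash is stable across runs and machines.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

cfg = RunConfig(
    model_id="Kamka-IT/BioTATA-7B",
    dataset="example-genomics-corpus",  # placeholder dataset name
    seed=42,
    learning_rate=2e-5,
    epochs=3,
)
print(cfg.fingerprint())  # identical configs always print the same ID
```

Logging this fingerprint alongside the produced checkpoint is one simple way to tie artifacts back to the exact configuration that created them.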

🔬 Featured Open-Source Contributions

🧬 Bioinformatics & Genomics Models

We have invested heavily in the intersection of LLMs and biological data.

📊 Datasets

High-quality models require high-quality data. We open-source our curation efforts to accelerate research.


🛠 Tech Stack & Expertise

Our models and pipelines are built using modern, scalable, and sovereign technologies:


💻 Using Our Pipelines

We design our models to be easy to integrate into your existing workflows. Here is a quick example of loading one of our text-generation models with the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kamka-IT/BioTATA-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the weights on the available GPU(s) or the CPU
# (this option requires the `accelerate` package).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Analyze the following nucleotide sequence: "
# Move the inputs to whatever device the model was loaded on,
# rather than hard-coding "cuda" (which fails on CPU-only machines).
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

(Note: for comprehensive pipeline tutorials, see the individual Model Cards!)


🤝 Let's Collaborate!

Whether you are a researcher looking to fine-tune a model, a developer building an agentic system, or a company seeking to deploy sovereign, self-hosted AI architecture, we would love to connect.