Fireworks AI

Fireworks AI inference provider using the OpenAI-compatible Chat Completions API.

Setup

```bash
go get github.com/zendev-sh/goai@latest
```

```go
import "github.com/zendev-sh/goai/provider/fireworks"
```

Set the FIREWORKS_API_KEY environment variable, or pass WithAPIKey() directly.
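The usual precedence in providers like this is that an explicit WithAPIKey value wins over the environment variable. A minimal stdlib sketch of that precedence rule (resolveAPIKey is a hypothetical helper for illustration, not part of goai):

```go
package main

import (
	"fmt"
	"os"
)

// resolveAPIKey illustrates option-over-environment precedence:
// an explicitly passed key wins; otherwise the env value is used.
func resolveAPIKey(explicit, fromEnv string) string {
	if explicit != "" {
		return explicit
	}
	return fromEnv
}

func main() {
	envKey := os.Getenv("FIREWORKS_API_KEY")
	// With no explicit key, the environment value is the fallback.
	fmt.Println(resolveAPIKey("", envKey) == envKey)
	// An explicit key takes precedence regardless of the environment.
	fmt.Println(resolveAPIKey("sk-explicit", envKey))
}
```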

Models

  • accounts/fireworks/models/llama-v3p3-70b-instruct
  • accounts/fireworks/models/mixtral-8x7b-instruct
  • accounts/fireworks/models/qwen2p5-72b-instruct

Model IDs use the full accounts/fireworks/models/ prefix format.
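Because every model ID carries that prefix, callers can defensively qualify bare model names before passing them to fireworks.Chat. A short sketch (qualifyModel is a hypothetical helper, not provided by the package):

```go
package main

import (
	"fmt"
	"strings"
)

const modelPrefix = "accounts/fireworks/models/"

// qualifyModel prepends the Fireworks account prefix to bare model
// names and leaves already-qualified IDs untouched (idempotent).
func qualifyModel(id string) string {
	if strings.HasPrefix(id, modelPrefix) {
		return id
	}
	return modelPrefix + id
}

func main() {
	fmt.Println(qualifyModel("llama-v3p3-70b-instruct"))
	// → accounts/fireworks/models/llama-v3p3-70b-instruct
}
```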

Tested Models

Unit tested (mock HTTP server, 2026-03-15): accounts/fireworks/models/llama-v3p3-70b-instruct

Usage

```go
model := fireworks.Chat("accounts/fireworks/models/llama-v3p3-70b-instruct")

result, err := goai.GenerateText(ctx, model, goai.WithPrompt("Hello"))
if err != nil {
	log.Fatal(err)
}
fmt.Println(result.Text)
```

Options

| Option | Type | Description |
| --- | --- | --- |
| WithAPIKey(key) | string | Set a static API key |
| WithTokenSource(ts) | provider.TokenSource | Set a dynamic token source |
| WithBaseURL(url) | string | Override the default https://api.fireworks.ai/inference/v1 endpoint |
| WithHeaders(h) | map[string]string | Set additional HTTP headers |
| WithHTTPClient(c) | *http.Client | Set a custom *http.Client |
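These With* functions follow Go's functional-options pattern: each returns a value that adjusts the provider's configuration during construction. A stdlib-only sketch of how such options compose over defaults (the config struct and newConfig are illustrative assumptions, not goai's actual internals):

```go
package main

import "fmt"

// config stands in for the provider's internal settings.
type config struct {
	baseURL string
	headers map[string]string
}

// Option mutates a config; each With* constructor returns one.
type Option func(*config)

// WithBaseURL overrides the endpoint.
func WithBaseURL(u string) Option {
	return func(c *config) { c.baseURL = u }
}

// WithHeaders merges additional HTTP headers.
func WithHeaders(h map[string]string) Option {
	return func(c *config) {
		for k, v := range h {
			c.headers[k] = v
		}
	}
}

// newConfig applies options over the documented defaults.
func newConfig(opts ...Option) *config {
	c := &config{
		baseURL: "https://api.fireworks.ai/inference/v1", // documented default
		headers: map[string]string{},
	}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newConfig(
		WithBaseURL("https://proxy.internal/v1"),
		WithHeaders(map[string]string{"X-Trace": "on"}),
	)
	fmt.Println(c.baseURL, c.headers["X-Trace"])
}
```

The pattern keeps the constructor signature stable while letting new options be added without breaking callers.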

Notes

  • The FIREWORKS_BASE_URL environment variable can override the default endpoint.

Released under the MIT License.