This is an n8n community sub-node that provides Google Gemini embeddings with extended features, including support for output dimensions, task types, and special handling for models like `gemini-embedding-001`.
- Support for any Google Gemini embedding model (specify by name)
- Output dimensions configuration (for supported models)
- Task type specification for optimized embeddings
- Title support for retrieval documents
- Batch size control for rate limit management
- Special handling for gemini-embedding-001 (single input per request)
- Uses standard Google API credentials (same as other Google AI nodes)
- Works as a sub-node with vector stores and other AI nodes
- In n8n, go to Settings > Community Nodes
- Search for `n8n-nodes-google-gemini-embeddings-extended`
- Click Install

Alternatively, install manually with npm:

```bash
npm install n8n-nodes-google-gemini-embeddings-extended
```
- A Google AI Studio account
- A Gemini API key
This node uses the standard Google PaLM/Gemini API credentials:
- Get your API key from Google AI Studio
- In n8n, create Google PaLM API credentials
- Enter your API key
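To confirm the key works outside n8n, you can call the Gemini embeddings REST endpoint directly with the same key. A minimal sketch, assuming `GOOGLE_API_KEY` holds your key; the model name is just an example:

```typescript
// Sketch: verify a Gemini API key by requesting a single embedding.
const apiKey = process.env.GOOGLE_API_KEY!;
const model = "text-embedding-004"; // example model

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${model}:embedContent`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-goog-api-key": apiKey },
    body: JSON.stringify({
      model: `models/${model}`,
      content: { parts: [{ text: "hello" }] },
    }),
  }
);

if (!res.ok) throw new Error(`Gemini API error: ${res.status} ${await res.text()}`);
const { embedding } = await res.json();
console.log(`Key works; received a ${embedding.values.length}-dimensional vector`);
```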
This is a sub-node that provides embeddings functionality to other n8n AI nodes.
- Add a vector store node to your workflow (e.g., Pinecone, Qdrant, Supabase Vector Store)
- Connect the Embeddings Google Gemini Extended node to the embeddings input of the vector store
- Configure your Google PaLM API credentials
- Enter your model name (e.g., `text-embedding-004`, `gemini-embedding-001`)
- Configure additional options as needed
- The vector store will use these embeddings to process your documents
```
[Document Loader] → [Vector Store] ← [Embeddings Google Gemini Extended]
                          ↓
                   [AI Agent/Chain]
```
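This node builds on `@langchain/google-genai` (see the changelog below), so the wiring is conceptually the same as in the following LangChain.js sketch. It assumes `@langchain/google-genai` 0.2.x and `langchain` are installed; the texts, metadata, and model name are illustrative:

```typescript
import { GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Embeddings provider (the role this sub-node plays in the workflow above).
const embeddings = new GoogleGenerativeAIEmbeddings({
  apiKey: process.env.GOOGLE_API_KEY,
  model: "text-embedding-004",
});

// Vector store consuming the embeddings (the role of the vector store node).
const store = await MemoryVectorStore.fromTexts(
  ["n8n is a workflow automation tool", "Gemini provides embedding models"],
  [{ source: "doc-1" }, { source: "doc-2" }],
  embeddings
);

const results = await store.similaritySearch("workflow automation", 1);
console.log(results[0].pageContent);
```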
Enter any valid Google Gemini embedding model name. Examples:
- `text-embedding-004` (latest, supports output dimensions)
- `gemini-embedding-001` (supports output dimensions, processes one input at a time)
- `embedding-001` (legacy model)
For models that support it, you can specify the number of output dimensions:
- Set to `0` to use the model's default dimensions
- Set to a specific number (e.g., `256`, `768`, `3072`) to get embeddings of that size
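At the API level this corresponds to the `outputDimensionality` field of the `embedContent` request. A minimal sketch against the public REST endpoint; the model name, text, and size are just examples:

```typescript
// Sketch: request 256-dimensional embeddings from a model that supports
// outputDimensionality (e.g. text-embedding-004).
const res = await fetch(
  "https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:embedContent",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": process.env.GOOGLE_API_KEY!,
    },
    body: JSON.stringify({
      model: "models/text-embedding-004",
      content: { parts: [{ text: "resize me" }] },
      outputDimensionality: 256, // omit (node option 0) to use the model's default size
    }),
  }
);

const { embedding } = await res.json();
console.log(embedding.values.length); // 256
```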
Optimize your embeddings by specifying the task type:
- Retrieval Document: For document storage in retrieval systems
- Retrieval Query: For search queries
- Semantic Similarity: For comparing text similarity
- Classification: For text classification tasks
- Clustering: For grouping similar texts
- Question Answering: For Q&A systems
- Fact Verification: For fact-checking applications
- Code Retrieval Query: For code search
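These options map to the `taskType` values accepted by the Gemini API. A small sketch of that mapping and an example request body; the document text and title are illustrative:

```typescript
// Task type strings accepted by the Gemini embeddings API.
type GeminiTaskType =
  | "RETRIEVAL_DOCUMENT"
  | "RETRIEVAL_QUERY"
  | "SEMANTIC_SIMILARITY"
  | "CLASSIFICATION"
  | "CLUSTERING"
  | "QUESTION_ANSWERING"
  | "FACT_VERIFICATION"
  | "CODE_RETRIEVAL_QUERY";

// Example request body for indexing a document
// (title is only honored for RETRIEVAL_DOCUMENT).
const body = {
  model: "models/text-embedding-004",
  content: { parts: [{ text: "How to configure webhooks in n8n" }] },
  taskType: "RETRIEVAL_DOCUMENT" satisfies GeminiTaskType,
  title: "n8n webhook guide",
};
```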
- Title: Add a title to documents (only for RETRIEVAL_DOCUMENT task type)
- Strip New Lines: Remove line breaks from input text (enabled by default)
- Batch Size: Control how many texts are processed at once (default: 100)
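Batch size only determines how many texts are sent per request, which is why lowering it eases rate-limit pressure. A rough sketch of the idea; the `chunk` helper and the commented `embedBatch` call are hypothetical, not the node's internal code:

```typescript
// Sketch: split the input texts into batches before embedding.
// Smaller batches mean fewer texts per request, at the cost of more requests.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const texts = ["first document", "second document" /* ... */];
const batchSize = 100; // lower this if you hit rate limits

for (const batch of chunk(texts, batchSize)) {
  // Each batch becomes one embedding request, e.g.:
  // await embedBatch(batch); // hypothetical helper
}
```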
- Semantic Search: Generate embeddings for documents and queries in vector stores
- RAG Applications: Build retrieval-augmented generation systems
- Document Similarity: Find similar documents in your vector database
- Multi-language Support: Use models that support multiple languages
- Code Search: Use CODE_RETRIEVAL_QUERY for searching code repositories
The `gemini-embedding-001` model has special requirements:
- Only accepts one text input per request
- The node automatically handles this limitation
- Processing may be slower for large datasets
- Supports output dimensions up to 3072
By comparison, `text-embedding-004`:
- Supports batch processing
- Default dimensions: 768
- Good balance of performance and quality
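The difference comes down to which endpoint each model can use: `batchEmbedContents` for batch-capable models versus one `embedContent` call per text for `gemini-embedding-001`. A sketch of both paths against the public REST API; the helper names and defaults are illustrative, and the node handles this for you automatically:

```typescript
const headers = {
  "Content-Type": "application/json",
  "x-goog-api-key": process.env.GOOGLE_API_KEY!,
};

// Batch-capable models can embed many texts in a single request.
async function embedBatch(texts: string[], model = "text-embedding-004") {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/${model}:batchEmbedContents`,
    {
      method: "POST",
      headers,
      body: JSON.stringify({
        requests: texts.map((text) => ({
          model: `models/${model}`,
          content: { parts: [{ text }] },
        })),
      }),
    }
  );
  const { embeddings } = await res.json();
  return embeddings.map((e: { values: number[] }) => e.values);
}

// gemini-embedding-001 needs one embedContent request per text,
// which is what this node does behind the scenes.
async function embedOneByOne(texts: string[], model = "gemini-embedding-001") {
  const vectors: number[][] = [];
  for (const text of texts) {
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/${model}:embedContent`,
      {
        method: "POST",
        headers,
        body: JSON.stringify({
          model: `models/${model}`,
          content: { parts: [{ text }] },
        }),
      }
    );
    const { embedding } = await res.json();
    vectors.push(embedding.values);
  }
  return vectors;
}
```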
This community node extends the official Google Gemini Embeddings node with:
- Output Dimensions Support: Configure the size of embedding vectors
- Extended Task Types: More task type options for optimization
- Title Support: Add titles to documents for better retrieval
- Batch Size Control: Manage rate limits effectively
- Better Error Messages: More detailed error information
This embeddings node can be used with:
- Simple Vector Store
- Pinecone Vector Store
- Qdrant Vector Store
- Supabase Vector Store
- PGVector Vector Store
- Milvus Vector Store
- MongoDB Atlas Vector Store
- Zep Vector Store
- Question and Answer Chain
- AI Agent nodes
- Authentication Errors
  - Ensure your Google PaLM API key is valid
  - Check that the API is enabled in your Google Cloud project
  - Verify you have sufficient quota
- Model Errors
  - Verify the model name is spelled correctly
  - Check Google's documentation for valid model names
- Rate Limit Errors
  - Reduce the batch size in options
  - Add delays between requests if processing large datasets
- Dimension Errors
  - Not all models support custom dimensions
  - Check model documentation for supported dimension values
- Bad Request Errors
  - `gemini-embedding-001` only accepts one input at a time (handled automatically)
  - Ensure text inputs are within token limits
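If rate limit errors persist even with a smaller batch size, adding a delay with exponential backoff between requests usually helps. A generic sketch, not part of the node itself, that may be useful if you call the API from a Code node or your own scripts:

```typescript
// Generic sketch: wait and retry on HTTP 429 with exponential backoff.
async function fetchWithBackoff(
  request: () => Promise<Response>,
  maxRetries = 5
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await request();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delayMs = Math.min(60_000, 1_000 * 2 ** attempt); // 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```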
Contributions are welcome! Please feel free to submit a Pull Request.
MIT
For issues and feature requests, please use the GitHub issue tracker.
- Version bump for republishing to ensure package visibility
- Updated all dependencies to latest versions
- Fixed TypeScript compatibility issues
- Updated ESLint configuration for ESLint 9.x
- Updated `@langchain/google-genai` from 0.0.23 to 0.2.10
- Updated `n8n-workflow` peer dependency to match current version (1.82.0)
- Improved build stability and security
- Initial release
- Support for Google Gemini embeddings via API
- Output dimensions configuration
- Task type selection with extended options
- Title support for documents
- Batch size control
- Special handling for gemini-embedding-001