Pinecone Review

Vector database for similarity search across billions of items in milliseconds

Pinecone is a managed vector database for building AI-powered search, recommendation, and retrieval-augmented generation (RAG) applications at scale.

AI Panel Score

8.2/10

6 AI reviews

AI Editor Approved

About Pinecone

Developers use Pinecone by converting data into vector embeddings using embedding models, then upserting those vectors into Pinecone indexes via API or SDK. Queries are performed by submitting a vector (or text that Pinecone converts to a vector), which returns the most semantically similar results from the index. The database supports hybrid search combining dense and sparse vectors, metadata filtering, and namespace-based data partitioning—all accessible through REST or gRPC APIs and official SDKs for Python, JavaScript, Java, and Go.
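The upsert-then-query loop described above can be sketched with a toy in-memory index. This is plain Python, not the Pinecone SDK: brute-force cosine similarity stands in for the database's approximate nearest-neighbor search, and the `namespace` and `filter` arguments mirror the concepts rather than Pinecone's exact API.

```python
import math

class ToyIndex:
    """In-memory stand-in for a vector index: upsert records, query by similarity."""

    def __init__(self):
        self.namespaces = {}  # namespace -> {id: (vector, metadata)}

    def upsert(self, records, namespace="default"):
        ns = self.namespaces.setdefault(namespace, {})
        for rec_id, vector, metadata in records:
            ns[rec_id] = (vector, metadata)

    def query(self, vector, top_k=3, namespace="default", filter=None):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        hits = []
        for rec_id, (vec, meta) in self.namespaces.get(namespace, {}).items():
            # Metadata filter is applied in the same pass as scoring.
            if filter and any(meta.get(k) != v for k, v in filter.items()):
                continue
            hits.append((rec_id, cosine(vector, vec), meta))
        hits.sort(key=lambda hit: hit[1], reverse=True)
        return hits[:top_k]

idx = ToyIndex()
idx.upsert(
    [
        ("doc1", [0.9, 0.1], {"lang": "en"}),
        ("doc2", [0.1, 0.9], {"lang": "de"}),
        ("doc3", [0.8, 0.2], {"lang": "en"}),
    ],
    namespace="tenant-a",
)
# Nearest neighbors to the query vector, restricted to English records.
print(idx.query([1.0, 0.0], top_k=2, namespace="tenant-a", filter={"lang": "en"}))
```

A real integration would swap `ToyIndex` for a Pinecone index handle and let the service do the scoring; the shape of the calls — upsert records into a namespace, query with a vector plus a metadata filter — stays the same.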

Pinecone's architecture separates storage from compute and reads from writes, enabling it to scale to billions of vectors and thousands of queries per second without manual capacity planning. The platform includes built-in inference capabilities—sparse and dense embedding models, and reranking models—so developers can handle embedding and retrieval within Pinecone rather than stitching together separate services. Updates to indexes are made searchable within seconds, which the company describes as "built-in freshness" for dynamic datasets. Pinecone Assistant is a separate product that wraps the vector database with a document Q&A interface and citation highlighting.

Pinecone targets machine learning engineers, backend developers, and AI application teams building semantic search, RAG systems, recommendation engines, and AI agents. A free Starter tier exists with limited index capacity; paid plans are usage-based (serverless) or provisioned, with enterprise options including dedicated read nodes and a Bring Your Own Cloud (BYOC) deployment model. Competing products in the vector database category include Weaviate, Qdrant, Milvus, Chroma, and PostgreSQL with pgvector.

Pinecone is available as a fully managed cloud service on AWS, GCP, and Azure. It integrates with frameworks and platforms including LangChain, LlamaIndex, n8n, Databricks, Vercel, and SageMaker. The Pinecone API is accessible over HTTP, and the BYOC option allows organizations to run Pinecone within their own cloud accounts to meet data residency or compliance requirements.

Features

AI

  • Hybrid Search (Dense + Sparse Retrieval)

    Combines dense vector semantic search with sparse keyword-based retrieval in a single query, enabling more accurate and relevant results than either method alone.

  • Integrated Embedding & Reranking Models

    Hosts built-in embedding and reranking models (including Cohere Rerank 3.5) natively within the platform, so developers can generate vectors and reorder results without external inference services.

  • Pinecone Assistant (AI Q&A)

    A built-in AI assistant feature that enables developers to create agents capable of answering complex questions over proprietary data stored in Pinecone indexes.
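The dense-plus-sparse combination behind the hybrid search feature above is often implemented as a weighted score fusion. The sketch below uses the common `alpha` convex-combination convention, which is an illustration of the idea, not necessarily Pinecone's internal formula.

```python
def hybrid_score(dense_score, sparse_score, alpha=0.7):
    """Convex combination of dense (semantic) and sparse (keyword) relevance.

    alpha=1.0 is pure semantic search; alpha=0.0 is pure keyword search.
    """
    return alpha * dense_score + (1 - alpha) * sparse_score

# Two candidates: one semantically close, one with strong keyword overlap.
candidates = {
    "semantic-match": {"dense": 0.92, "sparse": 0.10},
    "keyword-match": {"dense": 0.40, "sparse": 0.95},
}

for alpha in (1.0, 0.5, 0.0):
    ranked = sorted(
        candidates,
        key=lambda doc: hybrid_score(
            candidates[doc]["dense"], candidates[doc]["sparse"], alpha
        ),
        reverse=True,
    )
    print(f"alpha={alpha}: {ranked}")
```

Sliding `alpha` toward 0 lets exact keyword matches win; toward 1, semantic similarity dominates — which is why a single hybrid query can outperform either retrieval mode alone.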

Core

  • Full-Text Search (BM25)

    Supports a document schema index with BM25-ranked full-text search on string fields alongside dense and sparse vector fields under one unified schema.

  • Metadata Filtering

    Applies metadata filters inline during vector search in a single pass—no post-processing or added latency—allowing queries to combine semantic similarity with structured business rules.

  • Namespaces for Multitenancy

    Partitions records within a single index into isolated namespaces so each customer or agent gets its own context, enabling multitenancy without managing separate indexes.

  • Real-Time Indexing

    New vectors are acknowledged in under 100ms and become searchable within seconds, with no re-indexing jobs or pipelines required to incorporate new data.

  • Scalable Index Architecture (Billions of Vectors)

    Uses advanced indexing algorithms (IVF + product quantization) and distributed parallel processing to handle billions of vectors with auto-sharding, high recall, and low-latency queries at any scale.

  • Serverless Vector Database

    Fully managed, serverless architecture that automatically scales compute and storage independently based on workload demand, with no infrastructure provisioning or manual tuning required.
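The BM25 ranking behind the full-text search feature above can be sketched in a few lines. This is the standard Okapi BM25 formula with the usual defaults (k1=1.5, b=0.75); Pinecone's implementation details may differ.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized doc against a tokenized query."""
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                 # term frequency within this doc
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [
    "vector database for similarity search".split(),
    "managed serverless vector index".split(),
    "keyword search with bm25 ranking".split(),
]
print(bm25_scores("bm25 search".split(), docs))
```

Rare terms ("bm25") earn high IDF weight, so the third document outranks the first, which matches only the common term "search" — the keyword-side behavior that complements dense retrieval in a hybrid schema.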

Integration

  • Multi-Language SDKs with Async & gRPC Support

    Provides official Python and Node.js SDKs with optional asyncio support for async frameworks and gRPC transport for improved upsert and query performance.

  • Native Ecosystem Integrations

    Offers native integrations with LangChain, LlamaIndex, Haystack, and agentic IDEs such as Claude Code, Gemini CLI, and Cursor for end-to-end AI pipeline development.

Security

  • Enterprise Security (RBAC, CMEK, SSO, Private Endpoints)

    Provides role-based access controls, customer-managed encryption keys, single sign-on via SAML 2.0, private networking via AWS PrivateLink, plus SOC 2 Type II, HIPAA, GDPR, and ISO 27001 certifications.

Pricing Plans

Starter

Free

For developers, students, and small projects exploring the platform or building proofs of concept with no financial commitment.

  • Up to 5 indexes (each with 100 namespaces)
  • 2 GB storage
  • 2M write units/month
  • 1M read units/month
  • Read units free (0 cost)
  • 5M tokens/month for popular embedding models (e.g., llama-text-embed-v2, multilingual-e5-large)
  • 500 requests/month for pinecone-rerank-v0 and bge-reranker-v2-m3 reranking
  • Limited to AWS us-east-1 region
  • 1 project with up to 2 users
  • Indexes paused after 3 weeks of inactivity

Popular

Standard

$50/month

For production applications needing flexibility and pay-as-you-go scaling beyond a $50/month minimum commitment.

  • $50/month minimum applied to usage; anything over is billed pay-as-you-go
  • Read units at $16/million
  • Serverless indexes with usage billed on read units, write units, and storage
  • Multiple cloud providers and regions supported
  • Prepaid credits available ($8,000–$25,000) for additional capacity
  • HIPAA compliance available as an add-on
  • Support via console tickets

Enterprise

$500/month

For larger organizations requiring compliance, private networking, encryption key management, and premium support.

  • $500/month minimum applied to usage; anything over is billed pay-as-you-go
  • Read units at $24/million
  • HIPAA compliance included (no add-on needed)
  • Private networking (AWS PrivateLink, GCP Private Service Connect, Azure Private Link)
  • Encryption key management
  • Prepaid credits available ($8,000–$25,000) for additional capacity
  • Higher-tier support with faster response times
  • Dedicated Read Nodes (DRN) available for high-QPS, low-latency workloads

Dedicated (BYOC)

Contact sales

For enterprises requiring Pinecone to run inside their own cloud account and VPC (Bring Your Own Cloud), with custom sizing and pricing.

  • Runs Pinecone in your own cloud account and VPC
  • No SSH, VPN, or inbound network access required by Pinecone
  • Supports AWS PrivateLink, GCP Private Service Connect, or Azure Private Link for private connectivity
  • Custom read unit pricing
  • Dedicated Read Nodes (DRN) for billion-vector-scale search
  • Custom support and deployment configuration
  • Contact Pinecone for sizing and pricing

AI Panel Reviews

The Decision Maker

Strategic bet, vendor viability, timing, adoption approval
8.6/10

Pinecone is the default vector database bet for teams shipping RAG in production.

Category leader with real enterprise credentials and a $50/month on-ramp that's easy to defend. The tradeoff is cost at scale versus open-source alternatives like Qdrant or Weaviate.

They've been shipping long enough to have SOC 2 Type II, HIPAA, GDPR, and ISO 27001 all checked. That's not a startup promise — that's a compliance posture that clears most enterprise procurement reviews without a fight. BYOC and AWS PrivateLink for the data-residency crowd seals it.

The integrated reranking — Cohere Rerank 3.5 baked in natively — means teams aren't stitching together three services to get a decent RAG pipeline. Hybrid search combining BM25 full-text with dense vectors in a single query is the architecture most teams eventually rebuild anyway. They've built it first.

Two things give me pause. One: read units at $24/million on Enterprise means high-QPS workloads get expensive fast — Qdrant self-hosted is a real conversation at that scale. Two: the Starter tier pauses indexes after 3 weeks of inactivity, which kills developer momentum. Still the category default. Pilot it at Standard.

Competitive Positioning: 8.2

Weaviate and Qdrant are credible alternatives, but Pinecone's managed serverless architecture and native reranking pull ahead for teams that don't want to run infrastructure.

Reputation Risk: 9.0

LangChain, LlamaIndex, Databricks, and Cursor all integrate natively — this is what the peer group is already referencing in architecture reviews.

Speed to Value: 8.5

Free Starter tier, sub-100ms write acknowledgment, and real-time indexing mean a working prototype in hours, not sprint cycles.

Strategic Fit: 9.0

If you're building RAG, agents, or semantic search, this is the infrastructure — not a cost-saving swap, a capability unlock.

Vendor Viability: 8.8

Long enough in market to hold SOC 2 Type II, HIPAA, and ISO 27001 — that's a team that's been operating under scrutiny, not just shipping demos.

Pros

  • Hybrid dense + sparse search with BM25 in a single query — no separate pipeline to maintain
  • Full enterprise compliance stack: SOC 2 Type II, HIPAA, GDPR, ISO 27001, PrivateLink
  • Native integrations with LangChain, LlamaIndex, Claude Code, and Cursor
  • $50/month Standard tier gives production-grade serverless with pay-as-you-go overflow

Cons

  • Enterprise read units at $24/million make high-QPS workloads expensive versus self-hosted Qdrant
  • Starter tier pauses indexes after 3 weeks of inactivity — friction for developer adoption
  • No public funding data surfaced; runway visibility is limited

Right for

Engineering teams shipping RAG or semantic search in production who want managed infrastructure with enterprise compliance out of the box.

Avoid if

Your query volume is high enough that $24/million read units will dominate your bill and your team can run Kubernetes.

The Domain Strategist

Craft and strategy in the product's domain — adapts identity per category, same lens
8.4/10

Purpose-built vector infrastructure that makes RAG pipelines a knowledge management problem, not an engineering one.

Pinecone is the managed vector database that serious AI knowledge applications are built on. Hybrid search, real-time indexing, and enterprise-grade compliance make it the default choice when retrieval accuracy and data governance both matter.

Hybrid dense-plus-sparse retrieval in a single query pass is the feature that separates Pinecone from most of the field. Weaviate and Qdrant both do vector search, but the built-in Cohere Rerank 3.5 integration plus BM25 full-text search under one unified schema means retrieval pipelines stay coherent rather than sprawling across three services. For knowledge management, that coherence is the whole game — fragmented retrieval architectures produce fragmented answers.

Namespace-based multitenancy and metadata filtering at query time, not post-retrieval, are exactly what enterprise knowledge bases require. SOC 2 Type II, HIPAA, ISO 27001, and the BYOC deployment option mean this can live inside your compliance perimeter, not adjacent to it. The $500/month Enterprise floor is real cost, but private networking and CMEK justify it for any regulated knowledge corpus.

The honest constraint: Pinecone is infrastructure, not a knowledge management platform. Pinecone Assistant is a wrapper, not a governance layer — there's no taxonomy management, no content lifecycle tooling, no contributor workflow. If your KM mandate includes curation and ownership, you're pairing this with something else.

Category Positioning: 8.5

Clear category leader in managed vector databases, ahead of self-hosted alternatives like Milvus and Chroma on operational maturity, with pgvector as the only real cost-competitive threat at smaller scale.

Domain Fit: 7.6

Excellent retrieval infrastructure fit, but no native knowledge governance, taxonomy, or lifecycle features — it's the engine, not the knowledge management system.

Integration Surface: 9.0

Native integrations with LangChain, LlamaIndex, Databricks, SageMaker, and agentic IDEs like Claude Code and Cursor cover the full modern AI development stack.

Long-term Implications: 8.2

Adoption creates a durable, scalable retrieval foundation, but the embedding and index schemas you build today become migration debt if you ever need to move; BYOC mitigates vendor lock-in concerns.

Strategic Depth: 8.8

IVF plus product quantization at billions of vectors, with storage-compute separation and sub-100ms write acknowledgment — someone has shipped production-scale retrieval systems before.

Pros

  • Hybrid search combining dense, sparse, and BM25 in a single query pass
  • Real-time indexing with writes searchable in seconds — no re-index jobs
  • Full enterprise compliance stack: SOC 2 Type II, HIPAA, ISO 27001, BYOC deployment
  • Namespace multitenancy handles multi-tenant knowledge bases without separate indexes

Cons

  • No knowledge governance, taxonomy, or content lifecycle tooling — retrieval only
  • Enterprise tier starts at $500/month minimum, meaningful cost floor for smaller teams
  • Index schema and embedding choices create migration friction if retrieval strategy evolves
  • Starter plan limits to AWS us-east-1 and pauses indexes after 3 weeks of inactivity

Right for

Engineering-led teams building RAG applications or semantic search over large, dynamic enterprise knowledge corpora with compliance requirements.

Avoid if

Your KM mandate is primarily content governance, taxonomy management, or contributor workflows rather than retrieval infrastructure.

The Finance Lead

Money, total cost of ownership, contracts, procurement math
7.8/10

$50 floor, usage-based above — but read-unit overages need a calculator

Pinecone publishes pricing for three of its four tiers without a sales call. Usage-based billing above the $50 minimum means year-3 costs depend entirely on query volume.

Three priced tiers are visible on the pricing page: Starter at $0, Standard at $50/month minimum, Enterprise at $500/month minimum; BYOC is contact-sales only. That's rare transparency for a category where Weaviate and Milvus often hide enterprise numbers. HIPAA is an add-on at Standard — included at Enterprise. That delta matters for healthcare buyers.

The TCO problem is read-unit pricing: $16/million at Standard, $24/million at Enterprise. A team running 50M reads/month hits $800/month at Standard before storage and write units. That's $9.6K/year. Add 30% annual volume growth — year 3 is closer to $16K, just on reads. No published overage cap. That's the real number to pressure-test.
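The back-of-envelope read-unit math can be reproduced directly. Unit prices come from the published rate card; the 30% annual growth rate is this scenario's assumption, not a Pinecone figure.

```python
READ_UNIT_PRICE = {"standard": 16 / 1_000_000, "enterprise": 24 / 1_000_000}  # $/read unit

def annual_read_costs(reads_per_month, tier, growth=0.0, years=1):
    """Yearly read-unit spend, growing monthly volume once per year."""
    costs, volume = [], reads_per_month
    for _ in range(years):
        costs.append(volume * 12 * READ_UNIT_PRICE[tier])
        volume *= 1 + growth
    return costs

# 50M reads/month at Standard with 30% annual volume growth
costs = annual_read_costs(50_000_000, "standard", growth=0.30, years=3)
print([round(c) for c in costs])  # [9600, 12480, 16224]
```

Note this projects read units only; storage and write units stack on top, and the same volume on Enterprise's $24/million rate starts at $14.4K/year.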

Contract flexibility is unclear from public materials — no published auto-renewal window or termination terms. Prepaid credits run $8K–$25K, which means cash commitment up front. BYOC is custom-priced with no public floor. Procurement will need a direct conversation there.

Billing & Procurement: 7.2

Serverless billing via read/write units is clean in theory; BYOC and HIPAA add-on require vendor contact, adding procurement friction.

Contract Flexibility: 6.0

No public auto-renewal window, termination terms, or cancellation process; prepaid credit tiers ($8K–$25K) add cash commitment risk.

Pricing Transparency: 8.2

Unit prices for three tiers are published without a sales call — $16/million read units at Standard is specific and actionable.

ROI Clarity: 7.5

Latency under 100ms and real-time indexing are measurable; retrieval quality improvements in RAG pipelines are harder to attribute directly to Pinecone vs. embedding model choice.

Total Cost of Ownership: 6.8

Usage-based overages with no published cap create unpredictable year-3 invoices; storage and write units stack on top of read-unit costs.

Pros

  • Tier structure and unit prices published without a sales call
  • SSO and RBAC included at Enterprise — no separate add-on tax
  • Real-time indexing under 100ms is a measurable SLA
  • HIPAA and SOC 2 Type II compliance documented

Cons

  • Read-unit pricing at $16–$24/million makes budgeting volatile at scale
  • HIPAA is a paid add-on at Standard tier — not standard
  • No public auto-renewal or cancellation terms
  • BYOC pricing requires a sales call — no public floor

Right for

AI engineering teams building RAG or semantic search on AWS/GCP/Azure who need managed infrastructure and can predict query volume.

Avoid if

Your query volume is unpredictable and you can't absorb open-ended monthly overages without a cost ceiling.

The Domain Practitioner

Daily hands-on reality in the product's domain — adapts identity per category, same lens
8.4/10

Pinecone is the managed vector database RAG pipelines actually want to live in.

Hybrid dense+sparse search plus built-in reranking means fewer stitched-together services in your retrieval pipeline. The $50/month Standard floor is honest: this is production infrastructure, not a toy.

Real-time indexing — under 100ms acknowledgment, searchable within seconds — matters enormously for dynamic corpora. Literature databases, news feeds, document repositories with daily ingestion: no re-index jobs means your retrieval stays current without cron scaffolding. That's not a demo feature. That's a daily workflow requirement, and Pinecone satisfies it natively.

The integrated Cohere Rerank 3.5 and BM25 full-text search under one unified schema are the real differentiation against Weaviate or self-hosted Qdrant. Hybrid search without a sidecar keyword engine is genuinely fewer moving parts. Namespace-based multitenancy also means one index can partition across research projects or user groups without separate infrastructure. The Starter tier's index pause after 3 weeks of inactivity will sting researchers with irregular workflows.

Docs appear practitioner-authored — API-first, code-heavy, framework-specific examples for LangChain and LlamaIndex. The tradeoff: Standard's $16/million read-unit pricing scales fine for moderate QPS, but high-query research workloads could see costs compound quickly without Dedicated Read Nodes, which live behind the $500/month Enterprise tier.

Day-3 Reality: 8.2

Serverless auto-scaling and real-time indexing remove most operational friction; the Starter tier's inactivity pause is the one daily annoyance for irregular research use.

Documentation Practitioner-Fit: 8.5

Docs are API-first with framework-specific code examples, suggesting they were written by people who actually build retrieval pipelines.

Friction Surface: 7.9

Single-pass metadata filtering and built-in embedding models eliminate external service calls, but the 500 reranking requests/month on Starter will exhaust quickly in active retrieval testing.

Power-User Depth: 8.3

IVF + product quantization internals, gRPC transport, BYOC deployment, and Dedicated Read Nodes give serious practitioners genuine depth to grow into.

Workflow Integration: 8.6

Native LangChain, LlamaIndex, and Haystack integrations plus Python asyncio support means Pinecone drops into existing RAG and agent pipelines without new habits.

Pros

  • Hybrid dense+sparse search with BM25 in one unified schema — no sidecar keyword engine
  • Real-time indexing with sub-100ms acknowledgment handles dynamic corpora without re-index pipelines
  • Built-in Cohere Rerank 3.5 eliminates a separate inference service call
  • BYOC and SOC 2 Type II + HIPAA compliance cover institutional data requirements

Cons

  • Starter tier pauses indexes after 3 weeks of inactivity — punishes researchers with irregular cadences
  • 500 reranking requests/month on free tier exhausts fast during active retrieval experimentation
  • Dedicated Read Nodes for billion-vector-scale workloads require the $500/month Enterprise floor
  • Read unit pricing at $16–24/million can compound unexpectedly on high-QPS research queries

Right for

ML researchers and AI engineers building RAG systems or semantic search over large, frequently updated corpora who want managed infrastructure with no vector DB ops burden.

Avoid if

You're running occasional, low-frequency retrieval experiments where self-hosted Qdrant or pgvector on an existing Postgres instance covers the need at zero marginal cost.

The Power User

Daily human experience, onboarding, polish, learning curve, reliability
8.2/10

Best-in-class managed vector DB, but you'll need an engineer to feel at home

Pinecone is the category default for teams building RAG and semantic search at scale. It's infrastructure, not an app — and it knows it.

The $50 Standard plan is pay-as-you-go past the minimum, serverless, and honestly pretty generous for most production workloads. Real-time indexing under 100ms, hybrid dense-and-sparse search, built-in Cohere Rerank 3.5 — this is a lot of retrieval plumbing that you don't have to wire together yourself. Versus something like Qdrant or Weaviate, Pinecone's managed offering means zero infra babysitting. That's the pitch, and the pitch holds.

The daily experience, though, is API-first and proud of it. The console exists. It's functional. But if you're not comfortable in Python or Node, you'll feel it by day three. The free Starter tier pauses indexes after three weeks of inactivity — which will catch someone off guard at least once.

Mobile is basically nothing, which is fine because nobody's querying a vector database from their phone. Learning curve is real but the LangChain and LlamaIndex integrations mean most devs land in familiar territory fast. Not for no-code teams. Very much for teams who want to stop thinking about infrastructure.

Daily Polish: 7.5

Docs are solid and the SDK experience feels considered, but the console is functional-not-beautiful — two different levels of care depending on whether you're in code or the UI.

Learning Curve: 7.2

Native integrations with LangChain and LlamaIndex flatten the curve for developers, but namespaces, hybrid search tuning, and provisioned vs. serverless index decisions take real time to internalize.

Mobile Parity: 3.5

Web-only platform with an API-first design; mobile is not a real consideration here and the product doesn't pretend otherwise.

Onboarding Experience: 7.8

Free Starter index with 2GB storage and 5M embedding tokens gets you running fast, but the index-pausing-after-3-weeks behavior is a quiet gotcha for new users.

Reliability Feel: 8.5

SOC 2 Type II, HIPAA, ISO 27001, sub-100ms write acknowledgment — the reliability story is documented and credible for a managed service.

Pros

  • Hybrid search combining dense and sparse vectors in a single query — no separate pipelines
  • Built-in embedding and reranking models including Cohere Rerank 3.5, so fewer external services to manage
  • Real-time indexing with writes searchable in seconds, no re-index jobs
  • BYOC deployment option for data residency requirements at enterprise scale

Cons

  • Starter tier pauses indexes after 3 weeks of inactivity — will catch new users by surprise
  • Console UX is adequate but not polished; this is an API product wearing a UI
  • Mobile is read-only at best, nonexistent in practice
  • Enterprise plan starts at $500/month minimum with read units priced at $24/million — cost scales fast at high QPS

Right for

ML engineers and backend dev teams building RAG pipelines or semantic search who want managed infrastructure and don't want to think about scaling.

Avoid if

You're a no-code builder or small team without developer resources who expected a point-and-click search tool.

The Skeptic

Contrarian. Watch-outs, deal-breakers, broken promises, category patterns
7.8/10

Three green flags, one expensive exit, and a crowded graveyard behind it

Pinecone is the category's oldest managed play — real compliance stack, real scale story, real integrations. The risk isn't capability. It's what Qdrant and pgvector are doing for free.

Five named compliance certs. BYOC. PrivateLink on three clouds. That's not startup marketing — that's a product that's been through enterprise procurement cycles and survived. The $50 Standard floor is honest: usage-based pricing with a visible rate card ($16/million read units) is grounded. No hidden tiers I can see in the docs.

The differentiation story is shakier. Hybrid search plus BM25 plus reranking in one platform is genuinely useful. But Weaviate ships that too. Qdrant is self-hostable and free. pgvector is already in the Postgres most teams already run. Pinecone's moat is managed convenience — real, but not permanent.

Exit portability is the soft spot. Vectors are portable. But Pinecone Assistant, namespace logic, and the proprietary embedding endpoints create enough coupling that migration isn't trivial. Based on the architecture, you'd lose inference convenience and need to rebuild pipelines. Plan for that day one.

Competitive Differentiation: 6.8

Hybrid search plus reranking plus BM25 in one API is a real edge over bare pgvector, but Weaviate and Qdrant close that gap fast and cost less at low scale.

Exit Portability: 5.5

Raw vectors are portable, but Pinecone Assistant, built-in inference endpoints, and namespace-based multitenancy create real migration friction if you need to leave.

Long-term Viability: 8.0

SOC 2 Type II, HIPAA, ISO 27001, BYOC, and enterprise-tier pricing suggest a company that's been through real deals — no public funding data visible, but the compliance investment signals staying power.

Marketing Honesty: 8.2

Tagline matches the actual product — no AI buzzword inflation, and the pricing page shows a real rate card rather than 'contact us' opacity.

Track Record Match: 8.0

Follows the Snowflake managed-infrastructure playbook: separate storage from compute, sell convenience to ops-averse teams — that pattern has worked durably in adjacent categories.

Pros

  • Full compliance stack (SOC 2 Type II, HIPAA, ISO 27001) included or available at $500/month — not bolted-on paperwork
  • Real-time indexing under 100ms acknowledgment with no re-index jobs is operationally meaningful for dynamic datasets
  • Hybrid dense+sparse+BM25+reranking in one API genuinely reduces pipeline complexity vs. stitching Chroma plus a reranker
  • BYOC on AWS, GCP, and Azure covers most data residency objections without moving off Pinecone entirely

Cons

  • Qdrant and pgvector are free and self-hostable — Pinecone's value prop thins fast when infra teams get involved
  • Starter tier pauses indexes after 3 weeks of inactivity — an annoying gotcha for prototype-then-return workflows
  • Exit portability weakens the more you adopt: Pinecone Assistant and built-in inference create lock-in that plain vector storage doesn't
  • No changelog visible in scraped capabilities — hard to verify shipping cadence from public signals alone

Right for

ML engineers and backend teams building RAG or semantic search who want zero infrastructure management and enterprise compliance without hiring a DBA.

Avoid if

Your team runs Postgres already and can tolerate pgvector's limitations — the cost delta won't justify the convenience.

Buyer Questions

Common questions answered by our AI research team

Security

Is Pinecone HIPAA and SOC 2 compliant?

Yes, Pinecone is SOC 2 Type II, HIPAA, GDPR, and ISO 27001 certified.

Features

How fast are writes searchable after uploading?

Writes are acknowledged in under 100ms and become searchable within seconds — no re-index jobs or pipelines required.

Pricing

Can I start building for free?

Yes, you can create your first index for free, then pay as you go when ready to scale.

Security

Does Pinecone support private networking?

Yes, private networking is included as part of Pinecone's enterprise security features, alongside encryption, SSO, RBAC, and CMEK.

Integration

Does Pinecone work with Claude and Cursor?

Yes, Claude and Cursor are both shown as supported integrations on the homepage, along with Copilot, Codex, and Gemini CLI.
