xfalcox 6 hours ago

> Nobody’s actually run this in production

We do at Discourse, in thousands of databases, and it's leveraged in most of the billions of page views we serve.

> Pre- vs. Post-Filtering (or: why you need to become a query planner expert)

This was fixed in version 0.8.0 via Iterative Scans (https://github.com/pgvector/pgvector?tab=readme-ov-file#iter...)

> Just use a real vector database

If you are running a single service, that may be an easier sell, but it's not a silver bullet.

  • xfalcox 5 hours ago

    Also worth mentioning that we use quantization extensively:

    - halfvec (16-bit float) for storage
    - bit (binary vectors) for indexes

    Which makes the storage cost and ongoing performance good enough that we could enable this in all our hosting.
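
    Roughly, in pgvector terms (a sketch, not our exact schema; the table name and the 1024-dim size are made up, and it assumes pgvector 0.7+, where halfvec and binary_quantize exist):

        CREATE TABLE topics (
          id bigint PRIMARY KEY,
          embedding halfvec(1024)  -- 16-bit floats: half the storage of vector
        );

        -- index only the binary-quantized form: 1 bit per dimension
        CREATE INDEX ON topics USING hnsw
          ((binary_quantize(embedding)::bit(1024)) bit_hamming_ops);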

    • simonw an hour ago

      It still amazes me that the binary trick works.

      For anyone who hasn't seen it yet: it turns out many embedding vectors of e.g. 1024 floating point numbers can be reduced to a single bit per value that records if it's higher or lower than 0... and in this reduced form much of the embedding math still works!

      This means you can e.g. filter to the top 100 using extremely memory efficient and fast bit vectors, then run a more expensive distance calculation against those top 100 with the full floating point vectors to pick the top 10.
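
      Here's roughly what that looks like in pgvector (a sketch: hypothetical items table with 1024-dim embeddings, $1 as the query vector, and a binary-quantized expression index in place):

          SELECT id FROM (
            -- stage 1: cheap Hamming-distance shortlist over bit vectors
            SELECT id, embedding FROM items
            ORDER BY binary_quantize(embedding)::bit(1024) <~> binary_quantize($1)
            LIMIT 100
          ) shortlist
          -- stage 2: exact rerank of the shortlist with full-precision vectors
          ORDER BY embedding <=> $1
          LIMIT 10;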

      • FuckButtons 13 minutes ago

        Why is this amazing? It's just a 1-bit lossy compressed representation of the original information. If you have a vector in n-dimensional space, this is effectively just recording the sign of the original along each basis vector.

    • summarity 4 hours ago

      That's where it's at. I'm using the 1600D vectors from OpenAI models for findsight.ai, stored SuperBit-quantized. Even without fancy indexing, a full scan (1 search vector -> 5M stored vectors) takes less than 40ms. And with basic binning, it's nearly instant.

      • tacoooooooo 4 hours ago

        this is at the expense of precision/recall though isn't it?

        • pclmulqdq 2 hours ago

          Approximate nearest neighbor searches don't cost precision. Just recall.

        • summarity 3 hours ago

          With the quant size I'm using, recall is >95%.

  • tacoooooooo 4 hours ago

    for sure people are running pgvector in prod! i was more pointing at every tutorial

    iterative scans are more of a band-aid for filtering than a solution. you will still run into issues with highly restrictive filters. you still need to understand ef_search and max_scan_tuples, strict vs relaxed ordering, etc. it's an improvement for sure, but the planner still doesn't deeply understand the cost model of filtered vector search

    there isn't a general solution to the pre- vs post-filter problem—it comes down to having a smart planner that understands your data distribution. question is whether you have the resources to build and tune that yourself or want to offload it to a service that's able to focus on it directly
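
    for reference, the knobs in question look roughly like this (pgvector 0.8+; table and filter hypothetical):

        SET hnsw.ef_search = 100;                 -- candidate pool per scan
        SET hnsw.iterative_scan = relaxed_order;  -- or strict_order, or off
        SET hnsw.max_scan_tuples = 20000;         -- how far a scan may keep going

        SELECT id FROM docs
        WHERE tenant_id = 42                      -- the restrictive filter
        ORDER BY embedding <=> $1
        LIMIT 10;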

    • cortesoft 4 hours ago

      I feel like this is more of a general critique of technology writing; there are always a lot of “getting started” tutorials for things, but there is a dearth of “how to actually use this thing in anger” documentation.

  • dpflan 4 hours ago

    What are you using it for? Is it part of a hybrid search system (keyword + vector)?

    • xfalcox 3 hours ago

      In Discourse embeddings power:

      - Related Topics, a list of topics to read next, which uses embeddings of the current topic as the key to search for similar ones

      - Suggesting tags and categories when composing a new topic

      - Augmented search

      - RAG for uploaded files

VoVAllen 6 hours ago

We at https://github.com/tensorchord/VectorChord solved most of the pgvector issues mentioned in this blog:

- We're IVF + quantization, and can support 15x more updates per second compared to pgvector's HNSW. Inserting or deleting an element in a posting list is a super light operation compared to modifying a graph (HNSW)

- Our main branch can now index 100M 768-dim vectors in 20 min with 16 vCPUs and 32 GB of memory. This lets users index/reindex very efficiently. We'll have a detailed blog about this soon. The core idea is that KMeans is just a description of the distribution, so we can do lots of approximation to accelerate the process.

- For reindexing, Postgres actually supports `CREATE INDEX CONCURRENTLY` and `REINDEX CONCURRENTLY`. Users won't experience any data loss or inconsistency during the whole process.

- We support both pre-filtering and post-filtering. Check https://blog.vectorchord.ai/vectorchord-04-faster-postgresql...

- We support hybrid search with BM25 through https://github.com/tensorchord/VectorChord-bm25

The author understates the complexity of synchronizing between an existing database and a specialized vector database, as well as how to perform joint queries across them. This is also why we see most users choosing a vector solution on PostgreSQL.

  • nostrebored 6 hours ago

    So you’re quantizing and using IVF — what are your recall numbers with actual use cases?

    • VoVAllen 3 hours ago

      We do have some benchmark numbers at https://blog.vectorchord.ai/vector-search-over-postgresql-a-.... It varies by dataset, but in most cases it's 2x or more QPS compared to pgvector's HNSW at the same recall.

      • nostrebored an hour ago

        Your graphs are measuring accuracy [1] (i'm assuming precision?), not recall? My impression is that your approach would miss surfacing potentially relevant candidates, because that is the tradeoff IVF makes for memory optimization. I'd expect that this especially struggles with high dim vectors and large datasets.

        [1] https://cdn.hashnode.com/res/hashnode/image/upload/v17434120...

        • VoVAllen an hour ago

          It's recall. Thanks for pointing out this, we'll update the diagram.

          The core part is a quantization technique called RaBitQ. We can scan over the bit vectors to get an estimate of the real distance between the query and the data. I'm not sure what you mean by "miss" here. As with any approximate nearest neighbor index, every index including HNSW will miss some potential candidates.

  • tacoooooooo 5 hours ago

    We actually looked into VectorChord--it looks really cool, but it's not supported by RDS so it is an additional service for us to add anyway.

sgarland 6 hours ago

> The problem is that index builds are memory-intensive operations, and Postgres doesn’t have a great way to throttle them.

maintenance_work_mem begs to differ.

> You rebuild the index periodically to fix this, but during the rebuild (which can take hours for large datasets), what do you do with new inserts? Queue them? Write to a separate unindexed table and merge later?

You use REINDEX CONCURRENTLY.
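
A minimal sketch of both (index name hypothetical; note that HNSW builds slow down sharply once the graph stops fitting in maintenance_work_mem):

    -- cap the memory this session's index builds may use
    SET maintenance_work_mem = '4GB';
    SET max_parallel_maintenance_workers = 4;

    -- rebuild without holding an exclusive lock for the duration
    REINDEX INDEX CONCURRENTLY items_embedding_idx;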

> But updating an HNSW graph isn’t free—you’re traversing the graph to find the right place to insert the new node and updating connections.

How do you think a B+tree gets updated?

This entire post reads like the author didn’t read Postgres’ docs, and is now upset at the poor DX/UX.

  • ayende 5 hours ago

    > maintenance_work_mem

    That kills the indexing process; you cannot let it run with a limited amount of memory.

    > How do you think a B+tree gets updated?

    In a B+tree, you need to touch O(log N) pages. In an HNSW graph, you need to touch literally thousands of vectors once your graph gets big enough.

    • sgarland an hour ago

      > That kills the indexing process, you cannot let it run with limited amount of memory.

      Considering the default value is 64 MB, it’s already throttled quite a bit.

  • tacoooooooo 5 hours ago

    some fair points on the specifics.

    > maintenance_work_mem

    sure, but the knob existing doesn't solve the operational challenge of safely allocating GBs of RAM on prod for hours-long index builds.

    > REINDEX CONCURRENTLY

    this is still not free: takes longer, needs 2-3x disk space, and still impacts performance.

    > HNSW vs B+tree

    it's not that graph updates are uniquely expensive. vector workloads have different characteristics than traditional OLTP, and pg wasn't originally designed for them

    my broader point: these features exist, but using them correctly requires significant Postgres expertise. my thesis isn't "Postgres lacks features"—it's "most teams underestimate the operational complexity." dedicated vector DBs handle this automatically, and are often going to be much cheaper than the dev time put into maintaining pgvector (esp. for a small team)

    • sgarland an hour ago

      > sure, but the knob existing doesn't solve the operational challenge of safely allocating GBs of RAM on prod for hours-long index builds.

      How does it not? You should know the amount of freeable memory your DB has, and a rough idea of peak requirements. Give the index build some amount below that.

      > this is still not free: takes longer, needs 2-3x disk space, and still impacts performance.

      Yes, those are the trade-offs for not locking the table during the entire build. They’re generally considered acceptable.

      > it's "most teams underestimate the operational complexity."

      Agreed, which is why I don’t think dev teams should be running DBs if they lack expertise. Managed solutions (for Postgres; no idea on Pinecone et al.) only remove backup and failover complexity; tuning various parameters and understanding the optimizer’s decisions are still wholly on the human. RDBMS are some of the most complicated pieces of software that exist, and it’s absurd that the hyperscalers pretend that they aren’t.

alanwli 6 hours ago

I've seen a decent amount of production use of pgvector HNSW from our customers on GCP, but as the author noted it is not without flaws, and deployments are typically in the smallish range (0-10M vectors) because of the system characteristics he pointed out - i.e. build times, memory use. The tradeoffs to consider are whether you want to ETL data into yet another system and deal with operational overhead, eventual consistency, and application logic to join vector search with the rest of your operational data. Whether the tradeoffs are worth it really depends on your business requirements.

And if one needs transactional/consistency semantics, hybrid/filtered search, low latencies, etc., consider a SOTA Postgres system like AlloyDB with AlloyDB ScaNN, which has better scaling/performance (1B+ vectors), enhanced query optimization (adaptive pre-/post-/in-filtering), and improved index operations.

Full disclosure: I founded ScaNN in GCP databases and currently lead AlloyDB Semantic Search. And all these opinions are my own.

  • riku_iki 3 hours ago

    AlloyDB is not open source, so it's kind of a different niche.

jjfoooo4 5 hours ago

When using vectors / embeddings models, I think there's a lot of low hanging fruit to be had with non-massive datasets - your support documentation, your product info, a lot of search use cases. For these, the interface I really want is more like a file system than a database - I want to be able to just write and update documents like a file system and have the indexes update automatically and invisibly.

So basically, I'd love to have my storage provider give me a vector search API, which I guess is what Amazon S3 vectors is supposed to be (https://aws.amazon.com/s3/features/vectors/)?

Curious to hear what experience people have had with this.

clickety_clack 6 hours ago

My default is basically YAGNI. You should use as few services as possible, and only add something new when there’s issues. If everything is possible in Postgres, great! If not, at least I’ll know exactly what I need from the New Thing.

  • Fripplebubby 6 hours ago

    The post is a clear example of when YAGNI backfires, because you think YAGNI but then you actually do need it. I had this experience, the author had this experience, and you might too - the things you think you AGN are actually pretty basic expectations, not luxuries: being able to write vectors in real time without having to run other processes out of band to keep the recall from degrading over time, and being able to write a query that uses normal SQL filter predicates and similarity in one go for retrieval. These things matter, and you won't notice that they actually don't work at scale until later on!

    • simonw 4 hours ago

      That's not YAGNI backfiring.

      The point of YAGNI is that you shouldn't over-engineer up front until you've proven that you need the added complexity.

      If you need vector search against 100,000 vectors and you already have PostgreSQL then pgvector is a great YAGNI solution.

      10 million vectors that are changing constantly? Do a bit more research into alternative solutions.

      But don't go integrating a separate vector database for 100,000 vectors on the assumption that you'll need it later.

      • Fripplebubby 2 hours ago

        I think the tricky thing here is that the specific things I referred to (real-time writes and pushing SQL predicates into your similarity search) work fine at small scale, in such a way that you might not notice they're going to stop working at scale. When you have 100,000 vectors, you can write these SQL predicates (return the top 5 hits where category = x and feature = y) and they'll work fine, up until one day they don't work fine anymore because the vector space has gotten large. So I suppose it is fair to say this isn't YAGNI backfiring; this is me not recognizing the shape of the problem to come and not recognizing that I do, in fact, need it (though to me that feels a lot like YAGNI backfiring, because I didn't think I needed it, but suddenly I do)

        • morshu9001 an hour ago

          If the consequence of being wrong about the scalability is that you just have to migrate later instead of sooner, that's a win for YAGNI. It's only a loss if hitting this limit later causes service disruption or makes the migration way harder than if you'd done it sooner.

          • simonw an hour ago

            And honestly, even then YAGNI might still win.

            There's a big opportunity cost involved in optimizing prematurely. 9/10 times you're wasting your time, and you may have found product-market fit faster if you had spent that time trying out other feature ideas instead.

            If you hit a point where you have to do a painful migration because your product is succeeding that's a point to be celebrated in my opinion. You might never have got there if you'd spent more time on optimistic scaling work and less time iterating towards the right set of features.

            • Fripplebubby an hour ago

              I think I see this point now. I thought of YAGNI as, "don't ever over-engineer because you get it wrong a lot of the time" but really, "don't over-engineer out of the gate and be thankful if you get a chance to come back and do it right later". That fits my case exactly, and that's what we did (and it wasn't actually that painful to migrate).

              • simonw an hour ago

                Yeah, that's a great way of putting it.

            • morshu9001 25 minutes ago

              Yeah the "only if" is more like a "necessary, not sufficient." The future migration pain had better be extremely bad to worry about it so far in advance.

              Or it should be a well defined problem. It's easier to determine the right solution after you've already encountered the problem, maybe in a past project. If you're unsure, just keep your options open.

        • hobofan an hour ago

          > When you have 100,000 vectors [...] and they'll work fine

          So 95% of use-cases.

    • throwway120385 3 hours ago

      Many of the concerns in the article could be addressed by standing up a separate PG database that's used exclusively for vector ops and then not using it for your relational data. Then your vector use cases get served from your vector DB and your relational use cases get served from your relational DB. Separating concerns like that doesn't solve the underlying concern but it limits the blast radius so you can operate in a degraded state instead of falling over completely.

      • SoftTalker 2 hours ago

        I've always tried to separate transactional databases from those supporting analytical queries if there's going to be any question that there might be contention. The latter often don't need to be real-time or even near-time.

  • esafak 6 hours ago

    Databases are hard to swap out when you realize you need a different one.

    • morshu9001 an hour ago

      That's true when you're talking about a generalized rdbms, but if this is an isolated set of tables for embeddings or something and you don't entangle it with everything else, it can be fine. See also, using Postgres as a KV store.

rudderdev 6 hours ago

As others have commented, all the mentioned issues are resolved, so I would favour using pgvector. If Postgres can be a good choice over Kafka to deliver 100k events/sec [1], then why not pgvector over Chroma or other specialized vector search (unless there is a specific requirement that can't be solved with minor code/config changes)?

[1] Ref: https://news.ycombinator.com/item?id=44659678

  • tacoooooooo 5 hours ago

    how are all of the mentioned issues resolved?

bob1029 2 hours ago

I'm still stuck on whether or not vector search (regardless of vendor) is actually the right way to solve the kinds of problems that everyone seems to believe it's great at.

BM25 with query rewriting & expansion can do a lot of heavy lifting if you invest any time at all in configuring things to match your problem space. The article touches on FTS engines and hybrid approaches, but I would start there. Figure out where lexical techniques actually break down and then reach for the "semantic" technology. I'd argue that an LLM in front of a traditional lexical search engine (i.e., tool use) would generally be more powerful than a sloppy semantic vector space or a fine tuning job. It would also be significantly easier to trace and shape retrieval behavior.

Lucene is often all you need. They've recently added vector search capabilities if you think you really need some kind of hybrid abomination.

  • mhuffman an hour ago

    I like Lucene and have used it for many years, but sometimes a conceptually close match is what you want. Lucene and friends are fantastic at word matching, fuzzy searches, stem searches, phonetic searches, faceting and more, but have nothing for conceptually or semantically close searches (I understand that they recently added vector searches over documents). Also, vector searches almost always return something, which is not ideal in a lot of cases. I like Reciprocal Rank Fusion myself as it gives the best of both worlds. As a fun trick I use duckdb to do RRF over 5 million+ documents and get low double-digit ms response times even under load
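
    RRF itself is tiny: each ranked list contributes 1/(k + rank) per document and you sum across lists. A rough SQL sketch (k = 60 as in the original paper; vector_hits and keyword_hits are hypothetical per-query candidate sets):

        WITH vec AS (SELECT id, ROW_NUMBER() OVER (ORDER BY dist) AS r FROM vector_hits),
             lex AS (SELECT id, ROW_NUMBER() OVER (ORDER BY score DESC) AS r FROM keyword_hits)
        SELECT id, SUM(1.0 / (60 + r)) AS rrf_score
        FROM (SELECT * FROM vec UNION ALL SELECT * FROM lex) fused
        GROUP BY id
        ORDER BY rrf_score DESC
        LIMIT 10;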

antirez 3 hours ago

Redis Vector Sets, my work for the last year, address many of these points, I believe:

1. Updates: I wrote my own implementation of HNSW with many changes compared to the paper. The result is that the data structure can be updated while it receives queries, like the other Redis data types. You add vectors with VADD, query for similarity with VSIM, delete with VREM. Deleting vectors doesn't just leave a tombstone: the memory is actually reclaimed immediately.

2. Speed: The implementation is fast, with fully threaded reads and partially threaded writes: even insertion easily sustains a few hundred ops/sec, and querying with VSIM runs at around 50k ops/sec on normal hardware.

3. Trivial: You can reimplement your use case in 10 minutes, including learning how it works.

Of course it costs some memory, but less than you may guess: it supports quantization by default, transparently, and for a few million elements (most use cases) the memory usage is very low, totally affordable.

Bonus point: if you use vector sets you can ask my help for free. At this stage I support people using vector sets directly.

I'll link here the documentation I wrote myself, as it is a bit hard to find, you know... a README inside the repository, in 2025, so odd: https://github.com/redis/redis/blob/unstable/modules/vector-...

P.S. the README has a stale mention of the replication code being not really tested. I filled that gap later and added tests, fixed bugs and so forth.

IntrepidPig 4 hours ago

> Post-filter works when your filter is permissive. Here’s where it breaks: imagine you ask for 10 results with LIMIT 10. pgvector finds the 10 nearest neighbors, then applies your filter. Only 3 of those 10 are published. You get 3 results back, even though there might be hundreds of relevant published documents slightly further away in the embedding space.

Is this really how it works? That seems like it’s returning an incorrect result.
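
For reference, this is the query shape being described (hypothetical schema); as I understand it, without iterative scans that's exactly what happens:

    SELECT id FROM documents
    WHERE published               -- applied AFTER the index returns 10 rows
    ORDER BY embedding <=> $1
    LIMIT 10;                     -- so you can get back fewer than 10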

jeffchuber 6 hours ago

Good article - the most common use case I see for pgvector is typically “chat over your technical docs”: small corpus, doesn't change often (can rebuild the index), no multi-tenancy. That avoids many of the issues with post-filtering.

Chroma implements SPANN and SPFresh (to avoid the limitations of HNSW), pre-filtering, hybrid search, and has a 100% usage-based tier (many bills are around $1 per month).

Chroma is also apache 2.0 - fully open source.

chandureddyvari 4 hours ago

Is there a comprehensive leaderboard like ClickBench but for vector DBs? Something that measures both the qualitative (precision/recall) and quantitative aspects (query perf at 95th/99th percentile, QPS at load, compression ratios, etc.)?

ANN-Benchmark exists but it’s algorithm-focused rather than full-stack database testing, so it doesn’t capture real-world ops like concurrent writes, filtering, or resource management under load.

Would be great to see something more comprehensive and vendor-neutral emerge, especially testing things like: tail latencies under concurrent load, index build times vs quality tradeoffs, memory/disk usage, and behavior during failures/recovery

epolanski 5 hours ago

Curious if the author tried the new Redis module that brings HNSW vector search to redis.

From what I've seen it's fast, has an excellent API, and is implemented by a brilliant engineer in the space (Antirez).

But since I don't use these things beyond local tests, I can never really hold opinions over those who use these systems in production.

  • antirez 3 hours ago

    It's not a module, it is part of every new Redis version now. Well, actually: it is written in the form of a module, with the modules API, in order to improve the modularity of the Redis internals, but it is a "merged module", a new concept I introduced in Redis exactly to support the Vector Sets use case. Thank you for mentioning this.

  • mkesper 5 hours ago

    It's fast...because everything needs to be in memory. Expect astronomical cloud costs even for mid-sized data requirements.

    • epolanski 3 hours ago

      I don't know what a mid-sized data requirement is or how this is used in prod, but I have huge doubts that cost is the problem when performance is the need.

      Especially in the AI and startup space.

dangoodmanUT 4 hours ago

> What bothers me most: the majority of content about pgvector reads like it was written by someone who spun up a local Postgres instance, inserted 10,000 vectors, ran a few queries, and called it a day.

I get this taste from most posts about Postgres that don’t come from a “how we scaled Postgres to X” experience. It seems a lot of writers are trying to ride the wave of popularity, creating a ton of noise that can end up as tech debt for readers

  • SoftTalker an hour ago

    AI + Docker has made it really easy to set up trivial demo systems and write an article about it.

pqdbr 4 hours ago

I'd love to read a blog post like this about S3 Vector buckets. Does anyone have experience with it in production?

softwaredoug 4 hours ago

My real icky feeling is the layering on of postgres plugins to get a search solution to work.

Ok yeah there's PGVector. Then you need something to do full text search. And if you put all that together, you have a complex Postgres deployment.

It seems to make sense for simple operations, but I'd rather just get a search engine / vector database than try to twist Postgres's arm into a weird setup.

  • riku_iki 3 hours ago

    > do full text search. And if you put all that together, you have a complex Postgres deployment.

    Search is also just an extension? So it's a strong point: you have one self-contained server with a simple installation/maintenance story.

simonw 4 hours ago

"HNSW index on a few million vectors can consume 10+ GB of RAM or more (depending on your vector dimensions and dataset size). On your production database. While it’s running. For potentially hours."

How hard is it to move that process to another machine? Could you grab a dump of the relevant data, spin up a cloud instance with 16GB of RAM to build the index and then cheaply copy the results back to production when it finishes?

  • tacoooooooo 4 hours ago

    i discuss that specifically!

    > The problem is that index builds are memory-intensive operations, and Postgres doesn’t have a great way to throttle them. You’re essentially asking your production database to allocate multiple (possibly dozens) gigabytes of RAM for an operation that might take hours, while continuing to serve queries.

    > You end up with strategies like:

        Write to a staging table, build the index offline, then swap it in (but now you have a window where searches miss new data)
        Maintain two indexes and write to both (double the memory, double the update cost)
        Build indexes on replicas and promote them
        Accept eventual consistency (users upload documents that aren’t searchable for N minutes)
        Provision significantly more RAM than your “working set” would suggest
    
    > None of these are “wrong” exactly. But they’re all workarounds for the fact that pgvector wasn’t really designed for high-velocity real-time ingestion.

    short answer--maybe not that _hard_, but it adds a lot of complexity to manage when you're trying to offer real-time search. most vector DB solutions offer this ootb. This post is meant to just point out the tradeoffs with pgvector (that most posts seem to skip over)
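
    a sketch of that first strategy, for the curious (names hypothetical):

        -- load new data into a staging copy; nothing queries it yet,
        -- so a plain (non-concurrent) build off the hot path is fine
        CREATE TABLE docs_staging (LIKE docs INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
        -- ... bulk-load vectors into docs_staging ...
        CREATE INDEX ON docs_staging USING hnsw (embedding vector_cosine_ops);

        -- swap in; searches miss anything written to docs during the build
        BEGIN;
        ALTER TABLE docs RENAME TO docs_old;
        ALTER TABLE docs_staging RENAME TO docs;
        COMMIT;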

    • the_mitsuhiko 4 hours ago

      > short answer--maybe not that _hard_, but it adds a lot of complexity to manage when you're trying to offer real-time search. most vector DB solutions offer this ootb. This post is meant to just point out the tradeoffs with pgvector (that most posts seem to skip over)

      Question is if that tradeoff is more or less complexity than maintaining a whole separate vector store.

eigencoder 5 hours ago

I think these are the salient concerns I've faced at work using pgvector. Especially getting bit by the query planning when filtering -- it's hard to predict when postgres will decide to use pre- vs post-filtering.

As for inserts being difficult, we basically don't see that because we only update the vector store weekly. We're not trying to index rapidly-changing user data, so that's not a big deal for our use case.

machiaweliczny 4 hours ago

Is there a way to do hybrid search that combines vector similarity with scalars fast using pgvector? Or do I need to migrate to another tool?

arunmu 5 hours ago

There is pgvectorscale from Timescale, which uses a DiskANN-based data structure and supports pre- and post-filtering.

  • tacoooooooo 5 hours ago

    I mention this towards the end of the post. it looks like a good solution, but it's not available on RDS

    • akulkarni 3 hours ago

      pgvectorscale is 100% open source

      please ask your RDS rep to support it

      we (Tiger Data) are also happy to help push that along if we can

gerardatkonvo 6 hours ago

Another thing is that consolidation means you can scale less granularly. If vector searching suddenly becomes the bottleneck of your app, you can't scale just the vector side of things.

cpursley 6 hours ago

Yeah, but just like all other bolt-on databases, now your vital data/biz logic is disconnected from the hot new VC database of the month's logic and you have to write balls of mud to connect it all. That's a very big tradeoff (logic, operations, etc).

Furthermore, when all the hipster vector databases die or go into maintenance mode or get the license rug-pull when the investors come looking for revenue, Postgres will still be chugging along and getting better and better.

Anyways, all this vector stuff is going to fade away as context windows get larger (already started over the past 8 months or so).

  • qeternity 6 hours ago

    > Also, all this vector stuff is going to fade away as context windows get larger (already started over the past 8 months or so).

    People who say this really have not thought it through, or simply don't understand what the use cases for vector search are.

    But even if you had infinite context with perfect attention, attention isn't free. Even if you had linear attention, it's much, much cheaper to index your data than to reprocess everything. You don't go around scanning entire databases when you're just interested in row id=X

    • foobar10000 5 hours ago

      IMO for some things RAG works great, and for others you may need attention, hence the completely disparate experiences with RAG.

      As an example, if one is chunking inputs into a RAG, one is basically hardcoding a feature based on locality - which may or may not work. If it works - as in, it is a good feature (the attention matrix is really tail-heavy - LSTMs would work, etc...) - then hey, vector DBs work beautifully. But for many things where people have trouble with RAG, the locality assumption is heavily violated - and there you _need_ the full-on attention matrix.

  • tacoooooooo 5 hours ago

    > Anyways, all this vector stuff is going to fade away as context windows get larger (already started over the past 8 months or so).

    We're searching across millions of documents, so i doubt it

jgoode19 4 hours ago

[flagged]

  • rudedogg 4 hours ago

    Sorry but LLMs aren’t good enough to hide that your comment is slop.

    It’s funny I can tell you’re using Claude by the phrasing as well

    @dang please see this and other comments by this user

indigo945 5 hours ago

    > None of the blogs mention that building an HNSW index on a few million vectors 
    > can consume 10+ GB of RAM or more (depending on your vector dimensions and 
    > dataset size). On your production database. While it’s running. For potentially 
    > hours.
10 GB? Oh jolly gosh! That will almost show up as a pixel or two on my metrics dashboard.

Who are these people that run production Postgres clusters on tiny hardware and then complain? Has AWS marketing really confused people into believing that some EC2 "instance size" is an actual server?

  • tacoooooooo 4 hours ago

    guess it depends on your scale? for some, 10+ GB of RAM being consumed on an index build is > 25% of the DB's RAM. apply that same proportion to your setup and maybe it'll make more sense

  • cdelsolar 2 hours ago

    10GB of RAM is a pixel? how big is your company?