When Proposals Lose Their Edge: How Cheap Writing Tech Is Transforming Hiring Signals on Freelance Platforms
Imagine a world where writing a tailored job application costs barely anything. Sounds great for job seekers, right? Not so fast. A recent study dives into what happens when generative writing tech—think of it as ultra-cheap, high-quality proposal writing—slips into the hiring market. The researchers use data from Freelancer.com, one of the internet’s biggest digital labor marketplaces, to ask a big question: do the signals workers once used to convey ability and effort still matter when the cost of signaling collapses?
Their answer is both eye-opening and a tad unsettling. Before cheap writing, carefully crafted applications functioned as a Spence-like signal of a worker’s ability and commitment. After cheap writing becomes the norm, those signals lose their bite, and hiring becomes less meritocratic and a bit more price-driven. The paper not only documents this shift descriptively but also builds a structural model to quantify what happens if signaling disappears entirely. The counterfactual—no signaling at all—reveals a market that hires fewer high-ability workers and more low-ability workers, with meaningful welfare implications for workers.
Below is a friendly, accessible breakdown of the main ideas, what the data show, and why it matters for the future of work, hiring, and platform design. I’ll keep the jargon light and use intuitive examples and analogies to help you picture the dynamics at play.
A quick map of the idea: signaling, cost, and cheap writing
Signaling 101: In many markets, people send signals to indicate hidden qualities. Spence’s classic model shows that people invest effort (costly signaling) to reveal something about their ability. The idea is simple: a tailored cover letter or an essay can credibly convey aptitude that isn’t obvious from a résumé alone.
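To make Spence’s logic concrete, here is a minimal Python sketch with made-up numbers (the wages, effort level, and per-unit costs are all illustrative assumptions, not anything from the paper). Signaling separates types only while it is cheaper for the able; the moment the cost collapses for everyone, both types signal and the signal stops carrying information.

```python
# Toy Spence separating equilibrium: two worker types decide whether to
# produce a costly signal (a tailored proposal). All numbers below are
# illustrative assumptions.

def payoff(wage, signal_units, cost_per_unit):
    """Worker payoff: wage received minus the cost of producing the signal."""
    return wage - signal_units * cost_per_unit

W_HIGH, W_LOW = 100.0, 60.0      # wage if the employer believes you're high/low ability
SIGNAL_UNITS = 10.0              # effort a tailored proposal requires
COST_HIGH, COST_LOW = 2.0, 5.0   # signaling is cheaper for high-ability workers

# Separating equilibrium: the high type signals, the low type doesn't bother.
high_signals = payoff(W_HIGH, SIGNAL_UNITS, COST_HIGH) > payoff(W_LOW, 0, COST_HIGH)
low_abstains = payoff(W_HIGH, SIGNAL_UNITS, COST_LOW) < payoff(W_LOW, 0, COST_LOW)

# Cheap writing tech collapses everyone's cost: now the low type signals too,
# types pool, and the signal stops carrying information.
CHEAP = 0.1
low_now_signals = payoff(W_HIGH, SIGNAL_UNITS, CHEAP) > payoff(W_LOW, 0, CHEAP)
print(high_signals, low_abstains, low_now_signals)  # True True True
```

The single-crossing idea is all in the cost parameters: as long as the low type’s cost of mimicking the signal exceeds the wage premium, the signal stays credible.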
The twist: Generative AI now makes writing cheap. If everyone can produce polished, customized text at near-zero cost, signals based on writing quality should become less informative. That could scramble how employers learn about candidate ability.
The empirical stage: The authors study coding-related gigs on Freelancer.com, using a rich dataset that includes job posts, proposals, bids, timestamps, and even whether a proposal was written with an on-platform AI-writing tool introduced in 2023. They construct a novel LLM-based measure of how tailored a proposal is to a given job post and pair that with data on employer choices to infer signaling value.
The big takeaway: Before cheap writing, signals carried a lot of information about ability and effort; after cheap writing, signals become noisy and less predictive of outcomes. The authors even run a counterfactual where signaling vanishes to show how hiring patterns and welfare would shift.
If you want the short version: signaling (especially costly, job-specific writing) used to be a credible way for employers to separate high-skill workers from low-skill ones. Cheap writing tech disrupts that signaling channel, making it harder for employers to tell who’s truly capable, and that shifts who gets hired and how much workers get paid.
1) The anatomy of signaling in the pre-LLM era
What counted as a signal? Workers wrote proposals (and bids) tailored to the job post. The content of the proposal, plus how much time a worker spent crafting it, served as a signal of their effort and, by extension, their ability.
The clever measurement: The researchers didn’t rely on subjective human judgments alone. They built a scalable, LLM-based scoring system to rate how customized a proposal is to a specific job post. The score blends two kinds of signals:
- Custom signals: Evidence that the worker read the job details and tailored the proposal (e.g., noting specifics of the task, showing initiative, avoiding boilerplate).
- Generic signals: Evidence of quality (e.g., listing relevant skills, showing experience, good English, professional tone).
Signaling effort: They use “bid time”—how long a worker spends from first clicking the job post to submitting the proposal—as a proxy for signaling effort.
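A rough sketch of how rubric-style subscores might be blended into a single tailoring signal, in the spirit of the measure described above. The rubric items, 0–10 scale, and weights here are my assumptions for illustration; the paper’s actual measure has an LLM score each proposal against the job post.

```python
# Hypothetical blending of "custom" and "generic" rubric subscores into one
# signal score. Scale (0-10) and weights are illustrative assumptions.
from statistics import mean

def signal_score(custom_subscores, generic_subscores, custom_weight=0.6):
    """Blend 'custom' evidence (the proposal engages the job's specifics)
    with 'generic' evidence (skills, experience, professional tone)."""
    custom = mean(custom_subscores)    # e.g., mentions task details, shows initiative
    generic = mean(generic_subscores)  # e.g., relevant skills, good English
    return custom_weight * custom + (1 - custom_weight) * generic

# A boilerplate proposal: polished but not tailored to the job post.
boilerplate = signal_score(custom_subscores=[1, 2, 1], generic_subscores=[8, 9, 8])
# A tailored proposal: same polish, but it engages the task's specifics.
tailored = signal_score(custom_subscores=[9, 8, 9], generic_subscores=[8, 9, 8])
print(round(boilerplate, 2), round(tailored, 2))
```

Weighting the custom component more heavily reflects the paper’s emphasis: tailoring is what was costly to fake before cheap writing, so it is the part that carried the signal.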
What the data showed (descriptively) in the pre-LLM period:
- Higher signals meant higher employer willingness to pay: a one-standard-deviation increase in the signal raised demand by about as much as a $26 reduction in the bid would, indicating employers valued the signal.
- Signals predicted effort, and effort predicted job success: more customized proposals tended to come from workers who put in more effort, and higher effort was linked to better completion outcomes.
- Observable characteristics (like reputation) didn’t do a great job predicting the ability signaled by proposals. The signaling content told you more about ability than the workers’ on-platform stats.
So, pre-LLM, signaling mattered. Proposals weren’t just fluff; they were informative about who could do the job well.
2) The disruption: post-LLM dynamics and the on-platform AI writing tool
The LLM era arrives in April 2023 with an on-platform AI-writing tool. Workers on a paid tier could generate proposals at the click of a button, then edit them if they wished.
What changes in the signals?
- The distribution of signals shifts: in the post-LLM period, the average signal rises, and the variance expands—thanks largely to those using AI to generate proposals.
- But the payoff from signaling collapses. When you look at the post-LLM period, the link between signal and employer demand weakens dramatically. The hiring probability becomes almost flat with respect to signal.
A crucial detail: proposals produced with the AI-writing tool shift strongly toward high signal levels, but those signals no longer reliably predict effort or outcomes. In other words, the signal is no longer a credible indicator of underlying ability or likely success.
The upshot: cheap writing does not just trim the cost of signaling; it erodes the signal’s informational value. Employers can’t separate high-ability from low-ability workers using written applications the way they could before.
The paper emphasizes three observed shifts in post-LLM descriptive evidence:
- Signals become weaker predictors of demand.
- For AI-written proposals, signal is negatively related to effort.
- Signals no longer predict job completion conditional on being hired.
In plain terms: the same text that used to separate the best applicants from the rest now looks, to employers, increasingly similar across applicants. The perceived value of a customized cover letter as a predictor of performance fades.
3) A structural model: putting signaling on a testable footing
To go beyond descriptive patterns and quantify what happens if signaling truly disappears, the authors build a structural model that fuses three classic ideas:
Spence signaling: Workers invest costly effort to produce signals correlated with their ability. Signals aren’t perfect, but they carry information in equilibrium.
Discrete choice demand: Employers form utilities over applicant characteristics and beliefs about ability, then choose whom to hire from a subset of applicants they consider.
A scoring auction: Applicants compete on multiple dimensions (bids, signals, observable traits) to win the contract.
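The demand side can be sketched as a standard logit choice: the employer’s utility rises in believed ability and falls in the bid, with an outside option of hiring nobody. The sketch below ignores consideration sets and observable traits for simplicity, and the coefficients and applicant numbers are illustrative, not the paper’s estimates.

```python
# Minimal logit-demand sketch of the employer's hiring choice.
# Coefficients and applicants are illustrative assumptions.
import math

def hire_probabilities(applicants, ability_weight=1.0, price_weight=0.05):
    """applicants: list of (believed_ability, bid) pairs. Returns logit choice
    probabilities over the applicants plus the outside option (last entry)."""
    utilities = [ability_weight * a - price_weight * b for a, b in applicants]
    utilities.append(0.0)  # outside option: hire nobody
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# The high-ability worker bids more, echoing the paper's positive
# ability-cost correlation; demand still favors her while beliefs are informative.
probs = hire_probabilities([(2.0, 30.0), (0.5, 15.0)])
print([round(p, 3) for p in probs])
```

The signaling channel enters through `believed_ability`: when signals are informative, beliefs track true ability; when they are not, beliefs collapse toward a common prior and only the bid differentiates applicants.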
Key identification challenges they tackle:
- Distinguishing costs of signaling (effort) from strategic bidding decisions.
- Unpacking employers’ beliefs about ability that depend on bids, signals, and observable traits.
- Ensuring that the model’s inferred beliefs and costs line up with the observed hiring outcomes.
How they proceed, in simple terms:
- They first identify workers’ beliefs about their own chances of being hired from observed hiring decisions, conditional on bids and effort.
- With those beliefs pinned down, they invert the first-order conditions to recover each worker’s cost and ability from observed bids and efforts.
- They then recover how employers form beliefs about ability as a function of bids and signals, and estimate the demand side.
Estimation method (in three stages, conceptually):
1) Invert worker optimization to back out costs and abilities from bids, efforts, and observed hiring outcomes.
2) Nonparametrically estimate how employers form beliefs about ability from signals and bids.
3) Maximize the likelihood of observed employer choices given the inferred supply and the learned beliefs to pin down demand parameters.
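The inversion logic of stage 1 can be illustrated with a one-dimensional toy: if a worker sets her bid to maximize expected profit p(b)(b - c), the first-order condition pins down cost as c = b* + p(b*)/p'(b*). The hiring-probability curve below is an assumed toy form, not the paper’s estimated beliefs, and effort is left out for simplicity.

```python
# Sketch of the stage-1 idea: rationalize an observed bid as optimal and
# back out the worker's cost. The win-probability curve is an assumption.

def win_prob(b):
    # Assumed hiring probability, declining linearly in the bid on [0, 100].
    return max(0.0, 1.0 - b / 100.0)

def win_prob_deriv(b, eps=1e-6):
    # Central-difference numerical derivative.
    return (win_prob(b + eps) - win_prob(b - eps)) / (2 * eps)

def implied_cost(observed_bid):
    """Worker maximizes p(b)*(b - c); at the optimum, c = b* + p(b*)/p'(b*)."""
    return observed_bid + win_prob(observed_bid) / win_prob_deriv(observed_bid)

# With p(b) = 1 - b/100, the optimal bid solves b* = (c + 100)/2,
# so an observed $60 bid implies a cost of about $20.
print(implied_cost(60.0))  # roughly 20.0, up to float noise
```

The same trick, done jointly over bids and signaling effort, is how the paper recovers each worker’s cost and ability from observed behavior.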
What they learn from the pre-LLM data (the signaling world as a baseline):
- Employers’ willingness to pay for ability: about $52.16 on average for a one standard deviation increase in ability (roughly 79% of a typical bid’s standard deviation in their sample).
- Ability dispersion: big gaps across workers—employers value hiring someone at the 80th percentile more than someone at the 20th by about $97.
- Observable characteristics explain only a very small slice of ability variation (roughly 3%).
- The correlation between the measured signal and estimated ability is about 0.55, indicating a meaningful, but imperfect, signaling link.
- The correlation between ability and cost is positive but modest (about 0.19), implying higher-ability workers tend to have higher costs, on average.
Counterfactual: what if signaling disappears?
- They simulate a no-signaling world where workers only bid and employers only use observable traits to form beliefs about ability (i.e., signaling is zero).
- Compared to the pre-LLM signaling world:
- Hiring shifts toward lower-ability workers: top quintile hires drop 19%, bottom quintile hires rise 14%.
- This is driven by the loss of a reliable signal that used to separate high from low ability when priced signaling mattered.
- The market also becomes less efficient: total surplus falls about 1%, and worker welfare drops, while employer surplus barely budges.
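A stripped-down version of the counterfactual makes the mechanism visible: the same applicant pool, hired once by an employer who sees a credible ability signal and once by one who sees only bids. All numbers, including the dollar value of ability, are illustrative assumptions.

```python
# Toy no-signaling counterfactual. Higher-ability workers bid more, echoing
# the paper's positive ability-cost correlation; numbers are illustrative.

applicants = [(3.0, 40.0), (2.0, 30.0), (1.0, 22.0)]  # (ability, bid) pairs
ABILITY_VALUE = 15.0  # assumed employer value per unit of ability, in dollars

def hire(applicants, employer_sees_ability):
    """Pick the utility-maximizing applicant. Without a signal, the employer's
    belief about every applicant collapses to the population mean."""
    mean_ability = sum(a for a, _ in applicants) / len(applicants)
    def utility(ability, bid):
        believed = ability if employer_sees_ability else mean_ability
        return ABILITY_VALUE * believed - bid
    return max(applicants, key=lambda w: utility(*w))

with_signal = hire(applicants, employer_sees_ability=True)
no_signal = hire(applicants, employer_sees_ability=False)
print(with_signal, no_signal)  # (3.0, 40.0) (1.0, 22.0)
```

Once beliefs are flat across applicants, only price differentiates them, so the cheapest (here, lowest-ability) bidder wins: the toy analogue of the 19% drop in top-quintile hires.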
Why does this happen? A few key mechanisms:
- When signaling goes away, employers can’t differentiate on ability as well using pre-hire text alone.
- Because ability and cost are positively correlated, higher-ability workers bid higher; once employers cannot observe ability, price competition tilts demand away from them and toward cheaper, lower-ability bidders.
- Since observable characteristics don’t do a great job predicting ability, employers have little to fall back on to distinguish high from low ability.
Bottom line from the counterfactual: the absence of signaling makes the labor market on this platform less meritocratic, with a small hit to efficiency and a notable hit to workers’ welfare, particularly for those at the top of the ability distribution.
4) Practical implications: what does this mean for workers, employers, and platforms?
For job seekers:
- The era of “write it once, win it all” is fading. If your ability can be signaled mainly through costly writing, the signal loses value as AI-created content becomes widespread.
- Rely less on a single ultra-tailored, expensive proposal as the sole path to standing out. Consider building verifiable, on-the-job signals of ability (projects, certifications, a demonstrated track record, open-source work) that can’t be replicated by a generic AI-generated text.
- Time spent on signaling still matters, but the payoff structure shifts: effort now needs to translate into observable outcomes and repeated performance, not just a high-quality cover letter.
For employers:
- Be aware that signals in proposals may no longer be a reliable proxy for ability. Rely more on actual performance data, on-platform history, sample work, and perhaps live assessments or trial projects that reveal competency on the job.
- Invest in screening tools that go beyond pre-hire text signals. This could include standardized task-based evaluations, trial gigs, or other on-platform experiments that reveal how a worker performs in real tasks.
For platforms:
- The no-signaling counterfactual highlights a potential design shift: platforms could foster stronger on-job learning signals, like staged tasks, transparent evaluation rubrics, or post-hire performance signals that remain verifiable and harder to game with generic AI content.
- Consider balancing the benefits of AI-assisted proposals with safeguards that encourage genuine signal integrity, such as requiring evidence of work samples, showing evolution of skills over multiple jobs, or offering post-hire performance-based rewards.
Real-world takeaway:
- In markets that hinge on costly written communication for sorting, cheap signaling tools threaten meritocracy and efficiency. If you’re designing or participating in such markets, focus on screening methods that capture true ability and performance—beyond the content of pre-hire proposals.
5) Wider takeaways: what this means for the future of work
The study provides a rare, large-scale view into how a market-wide signaling channel can be eroded by a technology that makes a core activity almost frictionless. It’s not just about who wins or loses in a single market; it’s about the structural rebalancing of how firms learn whom to hire when one of the most informative signals—costly, customized writing—no longer costs much to produce.
The authors’ conclusion points to a broader narrative: in markets that rely on costly communication to distinguish types, AI-enabled automation can undermine signaling and tilt outcomes toward cheaper bidders. On the flip side, for markets that use communication to inform (not persuade) or to guide exploratory learning on the job, AI could potentially enhance efficiency by removing bottlenecks in conveyance.
A hopeful note for the design of future platforms and labor markets: think about how to preserve valuable signals while leveraging cheap, high-quality writing. That could mean designing tasks that reveal ability on the job, building richer feedback loops, or creating standardized skill assessments that are harder to fake with generic text.
6) Limitations and areas for future work
- The study focuses on a specific type of freelance coding work on Freelancer.com. While the insights are compelling, different markets (e.g., long-term hires, non-digital tasks) may exhibit different signaling dynamics.
- The no-signaling counterfactual isolates signaling, but real-world LLM effects likely spill into many other channels—productivity, task design, and task types—that also influence hiring and welfare. The authors acknowledge this and separate the signaling channel for clarity.
- As with any structural model, the estimates hinge on identification assumptions and modeling choices. The authors take care to lay out their identification strategy, but, as always, the results should be interpreted in light of those assumptions.
7) Final thoughts: lessons for readers curious about prompting and signaling
- If you’re a job seeker crafting proposals in a world where writing can be generated cheaply, focus on more than just the text. Build a portfolio of verifiable outcomes, real projects, and measurable skills that stand up to AI-generated competition.
- If you’re a hiring manager or platform designer, consider combining written signals with on-the-job demonstrations, practical tests, and transparent evaluation criteria. Signals matter, but only if they reliably map to real performance.
- For prompt designers and AI practitioners: this paper highlights a key truth about signaling versus informing. Text that persuades (signaling) can be gamed or diluted when cost drops, while information that captures actual ability or performance (informing tasks, assessments, or observable outputs) tends to be more robust to cheap text generation. Think about building prompts and tools that surface and verify on-the-job performance rather than merely generating convincing prose.
If you want to experiment with a prompt approach inspired by these ideas, try: “Show me a portfolio of three real tasks you’ve completed in a similar domain, with links or verifiable results; describe the challenge, your approach, the outcome, and one lesson learned.” This shifts the emphasis from a glossy tailored pitch to tangible, trackable capability—an approach that may fare better as signaling costs plummet.
Key Takeaways
Before cheap writing, highly customized proposals on Freelancer.com acted as a credible signal of a worker’s ability and effort, guiding employer hiring decisions and wages.
A large, data-rich study shows that signaling mattered in a pre-LLM world: higher signals predicted higher demand, and signals correlated with better job outcomes through their link to effort.
With the mass adoption of generative writing tools (and especially the on-platform AI-writing feature introduced in 2023), proposal signals became cheaper to produce and less informative for employers.
The post-LLM world features a weakened relationship between signals and hiring decisions. Proposals written with AI show elevated signal scores, but these signals no longer reliably predict effort or success.
A structural model combining Spence signaling, discrete choice demand, and a scoring auction reveals that removing signaling would push hiring toward lower-ability workers, reduce worker welfare, and modestly reduce overall market efficiency.
The welfare result is nuanced: employers’ surplus is nearly unchanged, but workers bear the brunt as high-ability workers struggle to distinguish themselves by price alone, especially since higher-ability workers also tend to have higher costs.
Practical implications: platforms and employers should invest in screening methods and on-job assessments that survive the era of cheap text. Workers should diversify signaling channels (projects, verifiable results) beyond costly but easily replicated proposals.
The broader takeaway: in markets built on costly written communication, cheap generation tools threaten meritocratic sorting. Designing hiring systems that combine robust on-the-job signals with fair, transparent evaluation will be crucial as AI-enabled writing becomes ubiquitous.
If you’re curious about prompting and want a quick edge: lean into prompts that ask for verifiable work samples, project outcomes, and measurable results rather than relying solely on narrative tailoring. That’s one practical way to adapt to a world where the signal from text is increasingly decoupled from actual ability.