Rust Server Performance and Optimization
Optimize your Rust server on Witchly.host for the best performance: world size recommendations, save intervals, entity management, and plugin optimization.
Rust is one of the most resource-intensive game servers to host. The game simulates a persistent open world with complex physics, entity interactions, decay systems, and AI — all of which demand significant CPU and memory. This guide covers practical optimization strategies to keep your Witchly.host Rust server running smoothly.
Understanding Rust Server Performance
Before optimizing, it helps to understand what affects performance:
- Entity count — Every placed item, structure, storage container, sleeping bag, and deployed object is an entity. This is the single biggest performance factor.
- World size — Larger maps require more memory and CPU for terrain, monument, and entity management.
- Player count — More players means more entity interactions, more network traffic, and more world changes per tick.
- Plugins — Oxide plugins add processing overhead, especially those that run on every server tick.
- Save operations — Auto-saves write the entire world state to disk, causing brief lag spikes proportional to save file size.
Key metric — Server FPS:
Rust servers target 30 FPS (not to be confused with client FPS). Check your server FPS with the `fps` command in the console. Consistent FPS above 20 indicates healthy performance; below 15, players will experience noticeable lag.
World Size Recommendations
World size directly impacts RAM usage and entity capacity. Choose your map size based on your plan and expected player count:
The Radtown (10 GB RAM):
| Player Count | Recommended World Size | Notes |
|---|---|---|
| 10-25 | 2000-2500 | Compact map, frequent PvP encounters |
| 25-50 | 2500-3000 | Balanced space and density |
| 50-75 | 3000-3500 | Maximum recommended for this plan |
The Launch Site (16 GB RAM):
| Player Count | Recommended World Size | Notes |
|---|---|---|
| 25-50 | 2500-3500 | Plenty of space, good monument access |
| 50-100 | 3500-4000 | Balanced for medium-large communities |
| 100-150 | 4000-4500 | Maximum recommended for this plan |
General rules:
- Every 500 units of world size adds roughly 500 MB - 1 GB of RAM usage
- Maps above 4500 are rarely necessary and significantly increase resource consumption
- A map that is too large for your player count feels empty and spreads players thin
- A map that is too small creates excessive PvP and resource competition
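World size is set through your launch parameters or `server.cfg` using the standard `server.worldsize` and `server.seed` convars, and a new size only takes effect on a fresh map. A minimal sketch (the values shown are illustrative examples, not recommendations for every server):

```
# Example values only: match worldsize to your plan and player count
server.worldsize 3500
server.seed 1234567
```

Changing `server.worldsize` on an existing save requires a map wipe before the new size applies.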
Save Interval Tuning
The `server.saveinterval` setting controls how often the server writes world data to disk. During a save, the server may experience a brief lag spike.
| Interval | Pros | Cons |
|---|---|---|
| 300 seconds (5 min) | Less data loss on crash, more frequent recovery points | More frequent save lag spikes |
| 600 seconds (10 min) | Standard balance, default setting | Moderate data loss risk on crash |
| 900 seconds (15 min) | Fewer lag spikes from saves | More progress lost on unexpected crash |
Recommendations:
- Default (600 seconds) is appropriate for most servers
- Reduce to 300 if your server has valuable plugin data or economy systems where loss would be disruptive
- Increase to 900 if save lag spikes are noticeable and your server is stable (rare crashes)
- Monitor save file size — larger files take longer to write. Check your save file in `server/my_server_identity/`
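Putting the table above into practice, a `server.cfg` sketch (600 is the default; adjust using the trade-offs listed):

```
# Default 10-minute cadence; use 300 for economy-heavy servers,
# 900 if save spikes are disruptive and crashes are rare
server.saveinterval 600
```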
Entity Management
Entities are the primary driver of server performance. A well-optimized server actively manages entity count.
Checking entity count:
Run `ent count` in the dashboard console. Here are general benchmarks:
| Entity Count | Performance Impact |
|---|---|
| Under 100,000 | Healthy — no optimization needed |
| 100,000 - 200,000 | Moderate — monitor FPS, decay should be managing this |
| 200,000 - 300,000 | High — performance degradation likely, consider a wipe soon |
| Above 300,000 | Critical — expect significant lag, wipe recommended |
Reducing entity count:
- Enable decay — Never disable decay. Set `decay.scale` to at least `1.0`. Decay removes abandoned structures over time, naturally cleaning up entities. See our Server Configuration guide.
- Enforce upkeep — Keep `decay.upkeep` enabled. Upkeep ensures bases require materials to maintain, causing unmaintained bases to decay faster.
- Wipe regularly — Regular wipes reset entity count to zero. Weekly or biweekly wipes prevent entity buildup. See our Wipe Guide.
- Admin cleanup — Use admin commands to remove problematic entities. `ent kill <entity-type>` removes all entities of a specific type; use this cautiously and only for cleanup of clearly abandoned or problematic entities.
- Limit deployable stacking — Consider plugins that limit how many entities players can place per area
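A cleanup pass from the dashboard console, following the steps above, might look like this (assumes admin access; `<entity-type>` stays a placeholder for the entity short-name you intend to remove; comments are explanatory only, so enter each command without them):

```
ent count               # check the current total first
decay.scale 1.0         # make sure decay is actually running
decay.upkeep true       # enforce upkeep costs
ent kill <entity-type>  # targeted removal, use with caution
```

Re-run `ent count` afterwards to confirm the total dropped.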
Garbage Collection Optimization
Rust’s server uses .NET garbage collection (GC), which periodically pauses to clean up memory. These pauses can cause brief lag spikes.
Monitoring GC:
Watch the console for GC-related messages. Frequent or long GC pauses indicate memory pressure.
Reducing GC impact:
- Keep your server’s memory usage within your plan’s limits
- Reduce world size if memory is consistently maxed out
- Remove memory-intensive plugins
- Ensure you are not running unnecessary background processes
Oxide Plugin Performance
Plugins are the second-largest performance factor after entity count. Each plugin adds CPU overhead, and poorly written plugins can severely impact performance.
High-impact plugin categories:
- Tick-based plugins — Plugins that run code every server tick (30 times per second) are the most expensive. Examples include anti-cheat plugins, automated systems, and real-time monitoring.
- Entity-tracking plugins — Plugins that monitor all entities or large numbers of players consume significant CPU.
- Database-heavy plugins — Plugins that frequently read/write to storage can cause IO bottlenecks.
Plugin optimization strategies:
- Audit your plugins regularly — Remove any plugins your community does not actively use
- Check plugin performance — Use the `oxide.show` command to see load times. Plugins taking over 100ms to hook events may need attention.
- Limit redundant plugins — Multiple plugins doing similar things (e.g., two chat plugins, two economy systems) waste resources
- Keep plugins updated — Plugin authors often release performance improvements in updates
- Configure plugin intervals — Many plugins have configurable tick rates or intervals. Increase these where real-time processing is not required.
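When auditing, the console commands above can be paired with Oxide's `oxide.unload` and `oxide.reload` to measure a single plugin's impact (PluginName is a placeholder; comments are explanatory only):

```
oxide.show               # load/hook timing info, as noted above
oxide.unload PluginName  # temporarily remove a suspect plugin
fps                      # compare server FPS with the plugin gone
oxide.reload PluginName  # bring it back if it was not the culprit
```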
Recommended plugin counts:
- The Radtown (10 GB): 15-25 plugins for optimal performance
- The Launch Site (16 GB): 25-40 plugins comfortably, depending on complexity
Network Optimization
Network performance affects player experience, especially for servers with higher player counts:
- Server tickrate — Keep at the default 30. Changing this is not recommended and can cause instability.
- Max players — Set this to a realistic number for your plan. Overloading player count beyond your plan’s capability causes lag for everyone.
- Server location — Choose a server location geographically close to the majority of your player base for lower latency.
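In `server.cfg` terms, the first two points amount to the following sketch (the maxplayers value is an example; set it to your plan's capacity):

```
server.tickrate 30    # leave at the default
server.maxplayers 75  # example value, match your plan
```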
Monitoring Server Health
Regularly check these metrics through your dashboard console:
| Command | What It Shows | Healthy Range |
|---|---|---|
| `fps` | Server frames per second | Above 20 |
| `ent count` | Total entity count | Under 200,000 |
| `status` | Connected players | Within your plan’s recommended range |
| `serverinfo` | Memory usage, uptime, performance | Memory under plan limit |
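A quick daily health check is just the four commands from the table, run in order from the dashboard console:

```
fps
ent count
status
serverinfo
```

If `fps` is below 20 or `ent count` is above 200,000, dig into the troubleshooting steps later in this guide.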
Establish a monitoring routine:
- Check FPS and entity count daily
- Review memory usage weekly
- Compare performance metrics across wipe cycles
- Note when performance starts degrading to inform your wipe schedule
Performance Checklist
Use this checklist to keep your server optimized:
- World size is appropriate for your plan and player count
- Decay is enabled with `decay.scale` at 1.0 or higher
- Upkeep is enabled (`decay.upkeep true`)
- Save interval is set between 300-900 seconds
- Entity count is monitored regularly
- Only necessary plugins are installed
- Plugins are kept up to date
- Player count limit matches your plan’s capacity
- Regular wipe schedule is in place
- Server FPS is monitored and stays above 20
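The configurable items in the checklist can be captured in one `server.cfg` sketch (worldsize and maxplayers are example values; comments are explanatory only):

```
server.worldsize 3500    # example: appropriate for plan and player count
server.maxplayers 75     # example: match your plan's capacity
decay.scale 1.0          # decay enabled at 1.0 or higher
decay.upkeep true        # upkeep enforced
server.saveinterval 600  # within the 300-900 second range
```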
When to Upgrade
Consider upgrading from The Radtown to The Launch Site if:
- Server FPS consistently drops below 20 during peak hours
- Memory usage regularly hits the plan limit
- You need a larger world size or higher player count
- You want to run more plugins without performance trade-offs
- Entity count reaches critical levels before your scheduled wipe
Troubleshooting Performance Issues
Sudden FPS drop:
- Check entity count — a large raid or base construction can spike entities
- Check for newly installed or updated plugins
- Review console for error spam (errors consume CPU)
- Run `serverinfo` to check memory usage
Gradual performance decline:
- Normal over a wipe cycle as entities accumulate
- Check if decay is working properly
- Consider adjusting your wipe schedule
- Review entity count trends
Save lag spikes:
- Increase save interval if spikes are too frequent
- Check save file size — very large files indicate high entity count
- A wipe resolves this immediately
Memory warnings or crashes:
- Reduce world size
- Remove resource-heavy plugins
- Lower max player count
- Upgrade your plan
For persistent performance issues, reach out to our support team on Discord and we can help diagnose the problem.