Building A Token Canister With ic-stable-memory

In this tutorial, we'll build a token canister with the ic-stable-memory library. This canister will use stable memory as its main storage, which allows you to store more than 4GB of data in it (a bigger canister). Also, upgrading such a token canister is a super-cheap procedure, no matter how much data the canister holds.
Library repository: github.com/seniorjoinu/ic-stable-memory.
Complete code of this tutorial is here.

Prerequisites

Project initialization

You can use the $ dfx new --type rust <project-name> command if you like, but I'm a bit conservative here, so in this tutorial I'll show you a more involved, but more flexible, way of initializing a project.
First, use the cargo new <project-name> command to create a new Rust project. Then edit <project-name>/Cargo.toml so it looks like this:
```toml
# Cargo.toml

[package]
name = "stable-token"
version = "0.1.0"
edition = "2021"

[profile.release]
codegen-units = 1
strip = true
lto = true
opt-level = 'z'
panic = 'abort'

[lib]
path = "src/actor.rs"
crate-type = ["cdylib"]

[dependencies]
ic-cdk = "0.5.2"
ic-cdk-macros = "0.5.2"
serde = "1.0.138"
candid = "0.7.14"
speedy = "0.8.2"
ic-stable-memory = "0.0.2"
```
Then create a file called <project-name>/dfx.json with this content:
```json
{
  "canisters": {
    "stable-token": {
      "build": "./build.sh",
      "candid": "./can.did",
      "wasm": "target/wasm32-unknown-unknown/release/stable-token-opt.wasm",
      "type": "custom"
    }
  },
  "dfx": "0.10.1",
  "networks": {
    "local": {
      "bind": "127.0.0.1:8000",
      "type": "ephemeral"
    }
  },
  "version": 1
}
```
Finally, create a file called <project-name>/build.sh which looks like this:
```bash
#!/bin/bash
# build.sh

SCRIPT=$(readlink -f "$0")
SCRIPTPATH=$(dirname "$SCRIPT")
cd "$SCRIPTPATH" || exit

cargo build --target wasm32-unknown-unknown --package stable-token --release && \
  ic-cdk-optimizer ./target/wasm32-unknown-unknown/release/stable_token.wasm -o ./target/wasm32-unknown-unknown/release/stable-token-opt.wasm
```
Congratulations! We've just initialized our project.

The token

On the IC, any token basically consists of two parts: a table of account balances and a transaction history ledger. In a real scenario it is preferable to keep these two parts in separate canisters. This is because a history ledger can scale quickly, to infinity and beyond, and you want to support this unbounded growth with dynamic canister spawning. So, in a real scenario, you would have a single account-balances canister and a growing number of history-ledger canisters.
But in this tutorial, for simplicity, we'll follow a single-canister design.
Let's create a file called <project-name>/src/actor.rs that will define our canister's logic.
In order to store account balances we'll use the SHashMap stable collection with Principals as keys and u64s as values. To store history entries, we will use the SVec stable collection, with the following structure as the value type:
```rust
// src/actor.rs

#[derive(CandidType, Deserialize, Readable, Writable)]
struct HistoryEntry {
    // Option, because of minting
    pub from: Option<SPrincipal>,
    // Option, because of burning
    pub to: Option<SPrincipal>,
    pub qty: u64,
    pub timestamp: u64,
}
```
Notice that we use SPrincipal instead of the regular Principal type. This is not a big deal: SPrincipal is just a wrapper around Principal that implements speedy::Readable and speedy::Writable.
Also notice that our HistoryEntry type implements both CandidType + Deserialize and Readable + Writable. This is because values of this type will be saved to stable memory using Speedy and passed over the network using Candid.
So, let's define our stable variables for this:
```rust
// src/actor.rs

// type aliases are mandatory for stable variables
type AccountBalances = SHashMap<SPrincipal, u64>;
type TransactionLedger = SVec<HistoryEntry>;
type TotalSupply = u64;

#[init]
fn init() {
    // initialize stable memory
    stable_memory_init(true, 0);

    // initialize stable variables
    s! { AccountBalances = AccountBalances::new() };
    s! { TransactionLedger = TransactionLedger::new() };
    s! { TotalSupply = 0 };
}
```
And this is it: our stable variables are ready to store anything we want. The only thing left is to correctly handle pointers between canister upgrades:
```rust
// src/actor.rs

#[pre_upgrade]
fn pre_upgrade() {
    // save stable memory meta-info (cheap)
    stable_memory_pre_upgrade();
}

#[post_upgrade]
fn post_upgrade() {
    // reinitialize stable memory and variables (cheap)
    stable_memory_post_upgrade(0);
}
```
During the pre_upgrade() hook we do not copy values to stable memory, because they are already there. The only thing we do here is store the pointer to our stable variables storage (you can read more about it in this article). The same goes for post_upgrade(): we just read the allocator and restore the pointer to the stable variables storage.

Minting

All the preparations are done, so let's move on to the business logic of our token canister, starting with the minting function:
```rust
// src/actor.rs

#[update]
fn mint(to: SPrincipal, qty: u64) {
    // update the receiver's balance
    let mut balances = s!(AccountBalances);
    let balance = balances.get_cloned(&to).unwrap_or_default();
    balances.insert(to, &(balance + qty));
    s! { AccountBalances = balances };

    // update total supply
    let total_supply = s!(TotalSupply);
    s! { TotalSupply = total_supply + qty };

    // emit ledger entry
    let entry = HistoryEntry {
        from: None,
        to: Some(to),
        qty,
        timestamp: time(),
    };
    let mut ledger = s!(TransactionLedger);
    ledger.push(&entry);
    s! { TransactionLedger = ledger };
}
```
This function is straightforward. We get the stable variable that holds account balances, read the balance of the account we want to mint new tokens to, mint those tokens, and store the pointer to the stable variable back in the stable variables storage.
Then we do the same with the total supply. Finally, we create a new transaction history entry and push it to the ledger stable variable.
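To make the flow concrete, here is the same three-step mint logic modeled in plain Rust: std's HashMap and Vec stand in for the stable collections, and the Balances, Entry and mint names are stand-ins for this sketch, not the canister's actual types.

```rust
use std::collections::HashMap;

// stand-in for the AccountBalances stable variable
type Balances = HashMap<String, u64>;

// stand-in for HistoryEntry
struct Entry {
    from: Option<String>, // None means the tokens were minted
    to: Option<String>,   // None means the tokens were burned
    qty: u64,
}

fn mint(balances: &mut Balances, ledger: &mut Vec<Entry>, total_supply: &mut u64, to: &str, qty: u64) {
    // 1. update the receiver's balance
    *balances.entry(to.to_string()).or_default() += qty;
    // 2. update the total supply
    *total_supply += qty;
    // 3. emit a ledger entry with from = None (a mint)
    ledger.push(Entry { from: None, to: Some(to.to_string()), qty });
}

fn main() {
    let mut balances = Balances::new();
    let mut ledger = Vec::new();
    let mut total_supply = 0u64;

    mint(&mut balances, &mut ledger, &mut total_supply, "aaaaa-aa", 1000);
    mint(&mut balances, &mut ledger, &mut total_supply, "aaaaa-aa", 20);

    assert_eq!(balances["aaaaa-aa"], 1020);
    assert_eq!(total_supply, 1020);
    assert_eq!(ledger.len(), 2);
}
```

The canister version does the same work; the only extra ceremony is reading each stable variable with s!(...) and writing the pointer back with s! { ... } after mutating it.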

Transferring and burning

Functions for transferring and burning are also pretty straightforward, so I'll just drop their code here without a deeper explanation:
```rust
// src/actor.rs

#[update]
fn transfer(to: SPrincipal, qty: u64) {
    let from = SPrincipal(caller());

    // update balances
    let mut balances = s!(AccountBalances);

    let from_balance = balances.get_cloned(&from).unwrap_or_default();
    assert!(from_balance >= qty, "Insufficient funds");
    balances.insert(from, &(from_balance - qty));

    let to_balance = balances.get_cloned(&to).unwrap_or_default();
    balances.insert(to, &(to_balance + qty));

    s! { AccountBalances = balances };

    // emit ledger entry
    let entry = HistoryEntry {
        from: Some(from),
        to: Some(to),
        qty,
        timestamp: time(),
    };
    let mut ledger = s!(TransactionLedger);
    ledger.push(&entry);
    s! { TransactionLedger = ledger };
}

#[update]
fn burn(qty: u64) {
    let from = SPrincipal(caller());

    // update balances
    let mut balances = s!(AccountBalances);

    let from_balance = balances.get_cloned(&from).unwrap_or_default();
    assert!(from_balance >= qty, "Insufficient funds");
    balances.insert(from, &(from_balance - qty));

    s! { AccountBalances = balances };

    // update total supply
    let total_supply = s!(TotalSupply);
    s! { TotalSupply = total_supply - qty };

    // emit ledger entry
    let entry = HistoryEntry {
        from: Some(from),
        to: None,
        qty,
        timestamp: time(),
    };
    let mut ledger = s!(TransactionLedger);
    ledger.push(&entry);
    s! { TransactionLedger = ledger };
}
```

Getters

Great! Now we want some functions to query data from our canister: balances, total supply and history. Let's define them:
```rust
// src/actor.rs

#[query]
fn balance_of(of: Principal) -> u64 {
    s!(AccountBalances)
        .get_cloned(&SPrincipal(of))
        .unwrap_or_default()
}

#[query]
fn total_supply() -> u64 {
    s!(TotalSupply)
}

#[query]
fn get_history(page_index: u64, page_size: u64) -> (Vec<HistoryEntry>, u64) {
    let from = page_index * page_size;
    let to = from + page_size;

    let ledger = s!(TransactionLedger);
    // ceiling division, so a partially filled last page is also counted
    let total_pages = (ledger.len() + page_size - 1) / page_size;

    let mut result = vec![];
    for i in from..to {
        if let Some(entry) = ledger.get_cloned(i) {
            result.push(entry);
        }
    }

    (result, total_pages)
}
```
The first two functions are pretty basic: when we want to return someone's balance or the total supply, we just get the stable variable holding this data and read from it.
But returning the transaction history is not so easy. Messages on the IC can't be bigger than ~2MB. This means that if we tried to return a list of transactions bigger than this maximum message size, we would get an error. So, for everything to work correctly, we have to introduce pagination here.
Our pagination technique is very simple: the caller passes a page index and a page size, and the function returns the subarray of entries on that page, plus the total number of pages of this page size. For example, if we have 1000 history entries and someone passes (10, 20) to this function, it will return the twenty history entries at indices 200 through 219, and 50 as the total number of pages.
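The page arithmetic can be sketched as a standalone plain-Rust function (the page helper here is a hypothetical model of get_history, not part of the canister):

```rust
// Return one page of entries plus the total page count.
// Ceiling division counts a partially filled last page as a full page.
fn page(entries: &[u64], page_index: usize, page_size: usize) -> (Vec<u64>, usize) {
    let from = page_index * page_size;
    let total_pages = (entries.len() + page_size - 1) / page_size;
    let slice = entries.iter().skip(from).take(page_size).cloned().collect();
    (slice, total_pages)
}

fn main() {
    // 1000 history entries; page 10 with page size 20 covers indices 200..=219
    let ledger: Vec<u64> = (0..1000).collect();
    let (entries, total_pages) = page(&ledger, 10, 20);

    assert_eq!(entries.first(), Some(&200));
    assert_eq!(entries.last(), Some(&219));
    assert_eq!(total_pages, 50);
}
```

Out-of-range pages simply yield an empty vector, which matches how the canister's loop skips indices for which get_cloned returns None.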
So much for the business logic. Another thing that may be useful is a way of monitoring how much stable memory our canister uses. The stable memory allocator provides a couple of functions for such monitoring, so let's define a getter for this info:
```rust
// src/actor.rs

#[query]
fn mem_metrics() -> (u64, u64, u64) {
    (
        // available
        stable::size_pages() * PAGE_SIZE_BYTES as u64,
        // allocated
        get_allocated_size(),
        // free
        get_free_size(),
    )
}
```
This function returns three numbers: the total available stable memory size, the allocated size and the free size. They satisfy available = allocated + free + C, where C is the size of the allocator itself (the allocator does not count its own bytes as allocated or free).

Trying it out

Okay, let's deploy this canister locally and see if it works.
```
$ dfx start --clean   # in a separate terminal
$ dfx deploy
```
The terminal should output something like this:
```
Deployed canisters.
URLs:
  Candid:
    stable-token: http://127.0.0.1:8000/?canisterId=ryjl3-tyaaa-aaaaa-aaaba-cai&id=rrkah-fqaaa-aaaaa-aaaaq-cai
```
Let's follow that link, and we'll see the Candid UI for our canister.
Okay, let's call the mem_metrics() function. Initially, it should output something like this:
```
› mem_metrics()
(65536, 1424, 63828)
```
which means that there are 64KB of stable memory available in total, with 1424 bytes already used (by the initial metadata for our stable variables) and 63828 bytes free.
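These figures also let us sanity-check the available = allocated + free + C relation; the 284-byte allocator footprint derived below is inferred from this particular output, not a documented constant:

```rust
// derive the allocator's own footprint C from the reported metrics,
// using available = allocated + free + C
fn allocator_overhead(available: u64, allocated: u64, free: u64) -> u64 {
    available - (allocated + free)
}

fn main() {
    // figures reported by mem_metrics() right after deployment
    let c = allocator_overhead(65536, 1424, 63828);
    assert_eq!(c, 284);
    println!("allocator overhead: {c} bytes");
}
```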
Let's mint some tokens to the account identifier "aaaaa-aa" and look at the transaction history:
```
› get_history(0, 10)
(
  vec {
    record { to = opt principal "aaaaa-aa"; qty = 1000; from = null; timestamp = 1657045294189284700 };
    record { to = opt principal "aaaaa-aa"; qty = 1000; from = null; timestamp = 1657045299394280079 };
    record { to = opt principal "aaaaa-aa"; qty = 20; from = null; timestamp = 1657045336540059649 };
  },
  1,
)
```
As we can see, there are now some transactions in the history ledger. Let's see how many tokens "aaaaa-aa" has now:
```
› balance_of(principal "aaaaa-aa")
(2020)
```
Great. Let's see what happened to our stable memory:
```
› mem_metrics()
(196608, 82525, 113799)
```
Oh my god, with just three transactions we somehow used up a whole page of stable memory and then some. How is that possible?
Don't worry, it is fine. This is simply how SHashMap works: initially it doesn't even allocate its table, but once the first key is inserted it allocates the table, which is, by default, 64K in size. The same goes for SVec: it only starts allocating new sectors once the first value is pushed to it. You can check this yourself: just mint more tokens and you'll see that memory usage doesn't spike anymore.
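For intuition, std's own HashMap shows the same lazy-allocation behavior, which you can observe directly (this is an analogy only; SHashMap manages its table in stable memory, not on the heap):

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<u32, u32> = HashMap::new();
    // a fresh HashMap allocates nothing until the first insert
    assert_eq!(map.capacity(), 0);

    map.insert(1, 1);
    // the first insert allocates a table with room for several entries
    assert!(map.capacity() >= 1);

    map.insert(2, 2);
    // further inserts reuse the table until it fills up
    assert!(map.capacity() >= map.len());
}
```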

Upgrading the canister

But the most important thing is to check whether this data persists between canister upgrades. As you remember, we didn't add any specific code to pre_upgrade() and post_upgrade(); we only handled some pointers.
Let's change something (so dfx counts our code as a new version) and perform the upgrade.
$ dfx deploy
Let's now try to fetch the transaction history again:
```
› get_history(0, 10)
(
  vec {
    record { to = opt principal "aaaaa-aa"; qty = 1000; from = null; timestamp = 1657045294189284700 };
    record { to = opt principal "aaaaa-aa"; qty = 1000; from = null; timestamp = 1657045299394280079 };
    record { to = opt principal "aaaaa-aa"; qty = 20; from = null; timestamp = 1657045336540059649 };
  },
  1,
)
```
Aaaaand... nothing changed. Exactly as it should be. Our data migrated flawlessly to the new version of the software.
Yes, this is the exact same behavior we get without ic-stable-memory, but the difference is that now we don't care at all how much data our canister holds: 1MB or 10GB. Even with 100GB of data your canister will still be able to upgrade itself, which is impossible to achieve with the traditional approach.

Conclusion

As you can see, ic-stable-memory is a powerful and easy-to-use tool when it comes to storing lots of data. As a developer, you just have to write slightly more verbose code, and everything else happens auto-magically.
Complete source code of this tutorial can be found here.
Take care!