<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Programming and Stuff]]></title><description><![CDATA[Programming and Stuff]]></description><link>https://tonyistomin.xyz/</link><image><url>https://tonyistomin.xyz/favicon.png</url><title>Programming and Stuff</title><link>https://tonyistomin.xyz/</link></image><generator>Ghost 4.29</generator><lastBuildDate>Fri, 20 Mar 2026 04:09:30 GMT</lastBuildDate><atom:link href="https://tonyistomin.xyz/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Real Deal: Optimizing Event-Sourcing with MongoDB and Prisma]]></title><description><![CDATA[<p>This story is about the technical challenges we faced and what we learned at HeatTransformers when rewriting our event-sourcing patterns to improve read performance by orders of magnitude.</p><h2 id="introduction">Introduction</h2><p>To explain what &quot;The Real Deal&quot; is and why it was necessary for us, we need to start by</p>]]></description><link>https://tonyistomin.xyz/the-real-deal/</link><guid isPermaLink="false">67e4569d5b1eff000105c138</guid><category><![CDATA[event-sourcing]]></category><category><![CDATA[typescript]]></category><category><![CDATA[mongodb]]></category><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Tue, 17 Jun 2025 09:38:38 GMT</pubDate><media:content url="https://tonyistomin.xyz/content/images/2025/05/real_deal_bg_ht--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://tonyistomin.xyz/content/images/2025/05/real_deal_bg_ht--1-.png" alt="The Real Deal: Optimizing Event-Sourcing with MongoDB and Prisma"><p>This story is about the technical challenges we faced and what we learned at HeatTransformers when rewriting our event-sourcing patterns to improve read performance by orders 
of magnitude.</p><h2 id="introduction">Introduction</h2><p>To explain what &quot;The Real Deal&quot; is and why it was necessary for us, we need to start by describing what event-sourcing is and how we used to implement it in our system.</p><h3 id="what-is-event-sourcing">What is Event-Sourcing</h3><!--kg-card-begin: markdown--><blockquote>
<p><a href="https://www.kurrent.io/event-sourcing">Event Sourcing</a> is an architectural design pattern where changes that occur in a domain are immutably stored as events in an append-only log.<br>
This provides a business with richer data as each change that occurs within the domain is stored as a sequence of events which can be replayed in the order they occurred.</p>
</blockquote>
<!--kg-card-end: markdown--><p>We use TypeScript, MongoDB, and Prisma ORM on our backend. Our main entity in the system, which represents the state of a deal with a client, is called (surprise) <em>Deal</em>. Each Deal contains its events as a nested list: all the changes that have happened to this Deal since its creation.</p><p>To get the most recent state of a Deal, we accumulate all of the Deal&apos;s events and apply them one by one using a MongoDB aggregation.</p><p>But before we dive into how Deal is structured, let&apos;s clarify a few things. Deal is a view on another model in the Prisma schema called DealEventStream. The Deal view is constructed by running an aggregation on DealEventStream.</p><p>For example, here is a DealEventStream document:</p><pre><code class="language-JSON">{
  &quot;_id&quot;: {
    &quot;$oid&quot;: &quot;678e4097ce850d2ecda12345&quot;
  },
  &quot;zohoDealId&quot;: &quot;12345000012345678&quot;,
  &quot;events&quot;: [
    {
      &quot;_id&quot;: {
        &quot;$oid&quot;: &quot;678e4097635ed21a39c12345&quot;
      },
      &quot;type&quot;: &quot;configure&quot;,
      &quot;data&quot;: {
        &quot;heatPumpInstallationType&quot;: &quot;HYBRID&quot;,
        &quot;gasConsumption&quot;: 10
      },
      &quot;timestamp&quot;: {
        &quot;$date&quot;: &quot;2025-01-20T12:24:55.906Z&quot;
      }
    },
    {
      &quot;_id&quot;: {
        &quot;$oid&quot;: &quot;678fb322d4f72271d6712345&quot;
      },
      &quot;type&quot;: &quot;configure&quot;,
      &quot;data&quot;: {
        &quot;gasConsumption&quot;: 123
      },
      &quot;userId&quot;: {
        &quot;$oid&quot;: &quot;672a0ce8e74ed63a49812345&quot;
      },
      &quot;timestamp&quot;: {
        &quot;$date&quot;: &quot;2025-01-21T14:45:54.381Z&quot;
      }
    }
  ]
}</code></pre><p>And here is the Deal view of this model:</p><pre><code class="language-JSON">{
 &quot;_id&quot;: {
    &quot;$oid&quot;: &quot;678e4097ce850d2ecda12345&quot;
  },
  &quot;zohoDealId&quot;: &quot;12345000012345678&quot;,
  &quot;config&quot;: {
    &quot;heatPumpInstallationType&quot;: &quot;HYBRID&quot;,
    &quot;gasConsumption&quot;: 123
  },
  &quot;events&quot;: [...]
}</code></pre><p>Notice how the events are aggregated into the <code>config</code> nested object. The <code>gasConsumption</code> is 10 in the first event, but it is overwritten with <code>123</code> by the second event. And <code>heatPumpInstallationType</code> is <code>HYBRID</code> in the config, since only the first event sets that key. There could be more events, and they would be applied the same way to construct the final view.</p><p>There are also fields outside of <code>config</code>, e.g. <code>zohoDealId</code>, which is the id of the entity in our CRM. We do not event-source those fields, for performance reasons described below.</p><h3 id="old-aggregation">Old Aggregation</h3><p>I will hide the simplified old MongoDB aggregation that we used to construct the Deal view from DealEventStream documents under a spoiler, since it is not that important to the main story. But if you are curious, take a look. It is not easy to read even in this simplified form.</p><!--kg-card-begin: html--><button style="margin-bottom: 40px; height: 40px; line-height: 0px; padding-left:10px; padding-right:10px; background:#555; color:#fff; border-radius:10px" title="Click to show/hide content" type="button" onclick="const className=&apos;aggregation-spoiler&apos;;const spoilerElement=document.getElementById(className);if(spoilerElement.style.display==&apos;none&apos;) {spoilerElement.style.display=&apos;&apos;}else{spoilerElement.style.display=&apos;none&apos;}">
Show/hide aggregation
</button>
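Conceptually, the config part of that aggregation is just a left fold over the configure events. Here is a plain-TypeScript sketch of the same folding logic (the event shape is simplified and the function name is illustrative, not part of our codebase):

```typescript
// Simplified shape of one entry in DealEventStream.events
type DealEvent = {
  type: string;
  data: Record<string, unknown>;
  timestamp: Date;
};

// Event types that contribute to `config`, as in the pipeline's $filter
const CONFIG_EVENT_TYPES = ['configure', 'configure-power'];

// Replay the events in order; later events shallow-merge over earlier
// ones, mirroring the pipeline's $reduce with $mergeObjects.
function buildDealConfig(events: DealEvent[]): Record<string, unknown> {
  return events
    .filter((event) => CONFIG_EVENT_TYPES.includes(event.type))
    .reduce<Record<string, unknown>>(
      (config, event) => ({ ...config, ...event.data }),
      {}
    );
}
```

For the two events shown earlier, this yields a config where the second event overwrites <code>gasConsumption</code> while <code>heatPumpInstallationType</code> survives from the first.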
<pre id="aggregation-spoiler" style="display: none"><code class="language-typescript">export const dealViewPipeline = [
  {
    $addFields: {
      configEvents: {
        $filter: {
          input: &apos;$events&apos;,
          as: &apos;item&apos;,
          cond: {
            $in: [&apos;$$item.type&apos;, [&apos;configure-power&apos;, &apos;configure&apos;]],
          },
        },
      },
      zohoSync: {
        $ifNull: [
          {
            $getField: {
              field: &apos;data&apos;,
              input: {
                $mergeObjects: {
                  $filter: {
                    input: &apos;$events&apos;,
                    as: &apos;item&apos;,
                    cond: {
                      $eq: [&apos;$$item.type&apos;, &apos;sync-to-zoho&apos;],
                    },
                  },
                },
              },
            },
          },
          {},
        ],
      },
    },
  },
  {
    $addFields: {
      updatedAt: {
        $cond: {
          if: {
            $eq: [
              {
                $size: &apos;$events&apos;,
              },
              0,
            ],
          },
          then: &apos;$createdAt&apos;,
          else: {
            $last: &apos;$events.timestamp&apos;,
          },
        },
      },
      revision: {
        $size: &apos;$events&apos;,
      },
      config: {
        $reduce: {
          input: {
            $map: {
              as: &apos;event&apos;,
              in: &apos;$$event.data&apos;,
              input: &apos;$configEvents&apos;,
            },
          },
          initialValue: initialArrayValuesInDealConfig,
          in: {
            $mergeObjects: [&apos;$$value&apos;, &apos;$$this&apos;],
          },
        },
      },
      configRevision: {
        $size: &apos;$configEvents&apos;,
      },
    },
  },
  {
    $unset: [&apos;configEvents&apos;],
  },
];</code></pre>

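One thing the event log gives us for free, regardless of which aggregation sits on top, is point-in-time reconstruction: replaying only the first <em>n</em> events yields the Deal config as of revision <em>n</em>. A sketch (simplified event shape; the function name is illustrative, not part of our codebase):

```typescript
// Simplified shape of one entry in DealEventStream.events
type DealEvent = {
  type: string;
  data: Record<string, unknown>;
  timestamp: Date;
};

// Rebuild the config as it was after the first `revision` events.
// Passing events.length reproduces the current config.
function configAtRevision(
  events: DealEvent[],
  revision: number
): Record<string, unknown> {
  return events
    .slice(0, revision) // events are stored in the order they occurred
    .filter((event) => ['configure', 'configure-power'].includes(event.type))
    .reduce<Record<string, unknown>>(
      (config, event) => ({ ...config, ...event.data }),
      {}
    );
}
```

This on-demand replay is what later lets us store only the latest state without losing any history.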
<!--kg-card-end: html--><h3 id="slow-performance">Slow Performance</h3><p>Do you notice the downside of this approach? When we need to read the latest state of some Deal, we must iterate over every one of that Deal&apos;s events, which is <em>slow</em>. How slow? Well, for fewer than 80k Deals with around 2 million events in total, the execution time was 7923 ms.</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2025/04/image.png" class="kg-image" alt="The Real Deal: Optimizing Event-Sourcing with MongoDB and Prisma" loading="lazy" width="322" height="308"></figure><p>Every time we filter Deals by some field in the Deal config, MongoDB needs to aggregate <em>every</em> DealEventStream into a Deal, check that field, and only then return the result. And of course, you cannot easily index aggregated fields in the config.</p><pre><code class="language-typescript">// Takes a lot of time
await prisma.deal.findMany({ where: { &apos;config.gasConsumption&apos;: 123 }});</code></pre><p>For our upcoming projects we needed high read performance, so the old aggregation did not suit us. That is our <strong>motivation</strong> for the Real Deal.</p><p>Can we do better? Yes, definitely.</p><h2 id="new-event-sourcing-solution">New Event-Sourcing Solution</h2><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2025/06/htwrench.png" class="kg-image" alt="The Real Deal: Optimizing Event-Sourcing with MongoDB and Prisma" loading="lazy" width="1024" height="1024" srcset="https://tonyistomin.xyz/content/images/size/w600/2025/06/htwrench.png 600w, https://tonyistomin.xyz/content/images/size/w1000/2025/06/htwrench.png 1000w, https://tonyistomin.xyz/content/images/2025/06/htwrench.png 1024w" sizes="(min-width: 720px) 720px"></figure><p>What if we store the <em>latest</em> state of every Deal instead of aggregating it on every read? Then reads should be blazingly fast and writes only negligibly slower.</p><p>In the new version of event sourcing our Deal will be a model, not a view. We will call it <em>NewDeal</em> so as not to confuse it with the previous view. We also introduce a new model, <em>NewDealEvent</em>, which will contain all the events that happened to the instances of NewDeal.</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2025/05/image-1.png" class="kg-image" alt="The Real Deal: Optimizing Event-Sourcing with MongoDB and Prisma" loading="lazy" width="449" height="704"></figure><p>Why will writes to NewDeal be a bit slower? Because every update of NewDeal also needs to create a NewDealEvent document of type &quot;UPDATE&quot;. This rebuilds some indexes on NewDealEvent, but overall it is still a fast operation. Concrete benchmarks of NewDeal are at the end of the article.
</p><p>So we have a vision of the new system, but how do we migrate 80k Deals with 2M+ events to it without losing any data or disrupting day-to-day work? We went forward with a <em>shadowing</em> approach.</p><h2 id="shadowing">Shadowing</h2><p>The idea behind shadowing is that we first create the NewDeal and NewDealEvent collections in production and fill them with data. Then we enable <em>writing</em> to them alongside DealEventStream and closely monitor any errors that occur during the shadowing period. So for any DealEventStream create or update, the corresponding create or update happens to NewDeal. This gives us a chance to find bugs in the new approach and fix them before fully moving to NewDeal.</p><pre><code class="language-typescript">await prisma.dealEventStream.update({
  where: {
    id: deal.id,
  },
  data: {
    partnerCampaignId: campaign.id,
  },
});

// Temporary shadowing for Real Deal
try {
  await prisma.newDeal.updateEntityAndCreateEvent({
    where: { id: deal.id },
    data: {
      partnerCampaignId: campaign.id,
    },
  });
} catch (error) {
  const sentryError = new SentryError(&apos;RealDeal shadowing error&apos;, 500, {
    category: &apos;technical&apos;,
    originalError: error,
    level: &apos;error&apos;,
    data: {
      prismaDealId: deal.id,
      partnerCampaignId: campaign.id,
    },
  });

  Sentry.withScope((scope) =&gt; {
    sentryError.enrichSentryScope(scope);
    Sentry.captureException(sentryError);
  });
}</code></pre><p>If anything bad happens inside the try block, we will see the error in Sentry and can quickly act on it.</p><p>We found multiple bugs in our implementation during that phase and fixed them. For example, one bug was in the transactional logic for creating and updating NewDeal. More on that in the next sections.</p><p>The next step is to switch <em>reading</em> from Deal to NewDeal. It is important to have a backup plan to roll this change back in case there is a critical issue. At this point the migration to NewDeal is almost finished. What&apos;s left is to back up and delete the old Deal data, and rename NewDeal to Deal.</p><h2 id="conversion-of-deals-into-newdeals">Conversion of Deals into NewDeals</h2><p>To proceed with the shadowing, we need to fill NewDeal and NewDealEvent with data first. We created a special script for this that can be run at any time on any database. We did not go for the usual database migration because we wanted the ability to easily re-run the script.</p><p>In this script, we need to aggregate the old DealEventStreams into Deals and create NewDeals from them. Then, for each Deal, we create NewDealEvents. The problem is that there are too many deals and events to do this in one batch; such a query would fail. The solution is to split the deals and events into several batches and create the data batch by batch.</p><p>The following is our initial attempt at creating deals and events. This code snippet shows the processing of one batch of 5000 deals. Can you spot the problem with it?</p><pre><code class="language-typescript">const BATCH_SIZE = 5000;

const newDealsPromise = prisma.newDeal.createMany({
  data: newDealsToCreate,
});

const newDealEventsBatches: NewDealEvent[][] = [];
for (let i = 0; i &lt; newDealEventsToCreate.length; i += BATCH_SIZE) {
  newDealEventsBatches.push(newDealEventsToCreate.slice(i, i + BATCH_SIZE));
}

const newDealEventsPromises = newDealEventsBatches.map(batch =&gt;
  prisma.newDealEvent.createMany({ data: batch })
);

await Promise.all([newDealsPromise, ...newDealEventsPromises]);</code></pre><p>The problem is, of course, the last line. <code>Promise.all</code> will open <em>a lot</em> of connections to MongoDB at the same time. Each of these connections will try to insert <em>a lot</em> of entities. This can easily bring down a MongoDB cluster and make it unresponsive. This is exactly what happened to us. &#x1F642;</p><p>We have an M10 MongoDB cluster in Atlas Cloud with auto-scaling turned on. When the cluster CPU usage stays around 90% for 20 minutes, the auto-scaling process starts. This works well for constant loads without spikes, but in our case we had a costly operation that created a lot of connections to the database. So the script triggered auto-scaling, which further exacerbated the situation, and the database became unavailable.</p><p>Some deal batches were actually processed in time, but several big batches caused the problem. One of these batches had as many as 366k events; splitting them into chunks of 5k meant around 73 concurrent promises. These promises took all of the cluster&apos;s resources.</p><p>Big shoutout to the MongoDB support team. We reached out, and they quickly resolved the issue by restarting the cluster. We then proceeded to improve the script.</p><p>Here is our final attempt:</p><pre><code class="language-typescript">let batchIndex = 0;
// Assumed to be declared alongside (not shown in the original snippet):
const EVENTS_BATCH_SIZE = 5000;
let cursor: string | undefined;
const errors: {
  batchIndex: number;
  failedDealIds: string[];
  error: { message: string; stack?: string };
}[] = [];

while (true) {
  const deals = await prisma.deal.findMany({
    take: BATCH_SIZE,
    ...(cursor ? { cursor: { id: cursor }, skip: 1 } : {}),
    orderBy: { id: &apos;asc&apos; },
  });

  if (deals.length === 0) break;

  const newDealsToCreate = deals.map((deal) =&gt; convertDealToNewDeal(deal));
  const newDealEventsToCreate = deals.flatMap((deal) =&gt;
    convertDealToDealEvents2(deal)
  );

  cursor = deals[deals.length - 1].id;
  batchIndex += 1;

  const newDealEventsBatches: NewDealEvent[][] = [];
  for (let i = 0; i &lt; newDealEventsToCreate.length; i += EVENTS_BATCH_SIZE) {
    newDealEventsBatches.push(
      newDealEventsToCreate.slice(i, i + EVENTS_BATCH_SIZE)
    );
  }

  try {
    await prisma.newDeal.createMany({
      data: newDealsToCreate,
    });
    for (const newDealEventsBatch of newDealEventsBatches) {
      await prisma.newDealEvent.createMany({ data: newDealEventsBatch });
    }
  } catch (error) {
    console.error(&apos;Error creating newDeal and newDealEvent entities:&apos;, error);
    errors.push({
      batchIndex,
      failedDealIds: deals.map((deal) =&gt; deal.id),
      error: {
        message: error.message,
        stack: error.stack,
      },
    });
  }
}</code></pre><p>This is the correct way to process batched data in MongoDB. We make paginated calls to fetch deals using a cursor. A cursor here is just the ID of the last deal in the previous batch. Then we split the deal events into batches and <code>await</code> inserting them sequentially. No multiple simultaneous DB connections this time.</p><p>We <code>try/catch</code> here and later store the result to a JSON file to make sure we keep track of every failed batch, so we can re-run the script for that batch if necessary.</p><p>Worked like a charm.</p><h2 id="prisma-extensions">Prisma Extensions</h2><p>Now that we have the data in MongoDB, it is time to do some operations on it. Each NewDeal creation should also create a NewDealEvent, and updates should create events too. We need to ensure the atomicity of these operations. That is, when there is an error during deal creation, no event should be created, and vice versa. To ensure this, we need to do both in a transaction.</p><p>However, transactions in Prisma are not <a href="https://www.prisma.io/blog/how-prisma-supports-transactions-x45s1d5l0ww1#how-prisma-supports-database-transactions-today">&quot;traditional&quot;</a> in that sense. If you try to use <code>prisma.$transaction</code> to update some documents in MongoDB, it will work. But if a second transaction tries to update the same documents while the first has not finished, the second transaction will fail with a write conflict error. That was the complication mentioned above that we found during the write shadowing period.</p><p>But Prisma saves the day with <em>nested queries</em>. Nested queries are a way of creating an associated entity in one go, along with the main entity. Perfect for our goal of creating both NewDeal and NewDealEvent.</p><p>Prisma allows you to write custom extension methods for its models, which is exactly the feature we need. We decided to use it and write our own methods that atomically create NewDeal and NewDealEvent, update NewDeal and create an event, and so on.</p><h3 id="creating-the-extensions-with-nested-queries">Creating the Extensions with Nested Queries</h3><pre><code class="language-typescript">export type NewDealUncheckedCreateArgs = Omit&lt;Prisma.NewDealCreateArgs, &apos;data&apos;&gt; &amp; {
  data: Prisma.NewDealUncheckedCreateInput;
};

export function createDealExtension(prisma: PrismaClient) {
  return {
    model: {
      newDeal: {
        /**
         * This method creates a new NewDeal entity and a corresponding NewDealEvent of type CREATE.
         */
        async createEntityAndEvent(
          args: NewDealUncheckedCreateArgs,
          options?: { userId?: string }
        ) {
          return await prisma.newDeal.create({
            ...args,
            data: {
              ...args.data,
              newDealEvents: {
                create: {
                  type: NewDealEventType.CREATE,
                  fieldChanges: args.data as InputJsonValue,
                  userId: options?.userId,
                },
              },
            },
          });
        },
        // ... other methods
      },
    },
  };
}</code></pre><p>Notice the <code>NewDealUncheckedCreateArgs</code> type with its modified <code>data</code> property. It disallows developers from using nested queries in <code>createEntityAndEvent</code> calls. For example, code like this would be rejected by the type-checker:</p><pre><code class="language-typescript">await prisma.newDeal.createEntityAndEvent({
  data: {
    type: &apos;HWP&apos;,
    contact: {
      create: {
        name: &apos;New contact&apos;,
        ...
      },
    },
    ...
  },
});</code></pre><p>That is to ensure developers do not create new entities through this query. That would break the ContactEvents structure by creating a Contact entity without a ContactEvent. Instead, developers are encouraged to create the Contact separately and pass its id like this:</p><pre><code class="language-typescript">const { id } = await prisma.contact.createEntityAndEvent(...);
await prisma.newDeal.createEntityAndEvent({
  data: {
    type: &apos;HWP&apos;,
    contactId: id,
    ...
  },
});</code></pre><h3 id="usage-of-extensions">Usage of Extensions</h3><p>The interface of the extensions is deliberately kept very close to the corresponding <code>create</code>, <code>update</code>, and other methods of the original <code>prisma.newDeal</code>. So it is easy to use:</p><pre><code class="language-typescript">await prisma.newDeal.createEntityAndEvent({
  data: {
    name: &apos;New Deal&apos;,
    type: &apos;HWP&apos;, // Stands for hybrid heat pump (warmtepomp in Dutch)
    // ... other fields
  },
});</code></pre><h2 id="performance-before-and-after">Performance Before and After</h2><p>Now we can delete the old aggregation, the Deal view, and the DealEventStream collection, leaving only the NewDeal and NewDealEvents collections. Of course, for cleanliness&apos; sake they will be renamed to Deal and DealEvents later.</p><p>The same query that took 8 seconds now takes 204 milliseconds on the new collections (a 39x improvement):</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2025/05/image--3-.png" class="kg-image" alt="The Real Deal: Optimizing Event-Sourcing with MongoDB and Prisma" loading="lazy" width="685" height="387" srcset="https://tonyistomin.xyz/content/images/size/w600/2025/05/image--3-.png 600w, https://tonyistomin.xyz/content/images/2025/05/image--3-.png 685w"></figure><pre><code class="language-typescript">// Really fast now
await prisma.newDeal.findMany({ where: { gasConsumption: 123 }});</code></pre><h2 id="key-takeaways">Key Takeaways</h2><blockquote>Aggregating entity state on every read is slow</blockquote><p>The better approach is to store the latest state of the entity and reconstruct previous versions from events on demand.</p><!--kg-card-begin: markdown--><blockquote>
<p>Be careful with Promise.all calls when doing batch operations in MongoDB</p>
</blockquote>
<!--kg-card-end: markdown--><p>This can bring the database down quite easily by spawning a lot of connections.</p><blockquote>Have a safe &quot;shadowing&quot; period during big architectural changes.</blockquote><p>Do not do everything in one commit; gradually move to the new architecture step by step. Have a backup plan to go back one step during dangerous transitions. Validate the consistency of your data often while you are between steps.</p>]]></content:encoded></item><item><title><![CDATA[About me]]></title><description><![CDATA[<p>Hi, my name is Anton Istomin. I am a senior JS/TS, Node.js, React full-stack software developer.</p><p>If you are looking for my CV, <a href="https://cv.tonyistomin.xyz">you can find it here</a>.</p><p>The preferred way of contact is via email <code>istomanton@gmail.com</code>.</p><p><a href="https://tonyistomin.xyz/links/">Other useful links.</a> </p>]]></description><link>https://tonyistomin.xyz/about-me/</link><guid isPermaLink="false">64ac798fd2bbd3000136dcc7</guid><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Mon, 10 Jul 2023 23:12:07 GMT</pubDate><content:encoded><![CDATA[<p>Hi, my name is Anton Istomin. 
I am a senior JS/TS, Node.js, React full-stack software developer.</p><p>If you are looking for my CV, <a href="https://cv.tonyistomin.xyz">you can find it here</a>.</p><p>The preferred way of contact is via email <code>istomanton@gmail.com</code>.</p><p><a href="https://tonyistomin.xyz/links/">Other useful links.</a> </p>]]></content:encoded></item><item><title><![CDATA[Macbook touch bar for git in iTerm2]]></title><description><![CDATA[<p>I would like to share with you my settings for git in iTerm2.</p><p>You can use <code>git status</code>, <code>git add .</code>, <code>git commit -m &quot;&quot;</code>, <code>git pull</code>, <code>git push</code>, <code>git switch main</code>, <code>git switch master</code> and <code>git switch develop</code> just by clicking one of these cuties.</p><p>Configuration is pretty straightforward,</p>]]></description><link>https://tonyistomin.xyz/macbook-touch-bar-in-iterm2/</link><guid isPermaLink="false">623e46c629b9140001571b1c</guid><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Fri, 25 Mar 2022 23:03:34 GMT</pubDate><media:content url="https://tonyistomin.xyz/content/images/2022/03/iterm-touch-bar-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://tonyistomin.xyz/content/images/2022/03/iterm-touch-bar-1.jpg" alt="Macbook touch bar for git in iTerm2"><p>I would like to share with you my settings for git in iTerm2.</p><p>You can use <code>git status</code>, <code>git add .</code>, <code>git commit -m &quot;&quot;</code>, <code>git pull</code>, <code>git push</code>, <code>git switch main</code>, <code>git switch master</code> and <code>git switch develop</code> just by clicking one of these cuties.</p><p>Configuration is pretty straightforward, but it can take some time to figure out the first time you try it. 
So here is a little guide.</p><p>Open <code>Profiles</code> -&gt; <code>Open profiles</code> -&gt; Choose a profile -&gt; <code>Edit profiles</code> -&gt; <code>Keys</code>.</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2022/03/image.png" class="kg-image" alt="Macbook touch bar for git in iTerm2" loading="lazy" width="1810" height="1332" srcset="https://tonyistomin.xyz/content/images/size/w600/2022/03/image.png 600w, https://tonyistomin.xyz/content/images/size/w1000/2022/03/image.png 1000w, https://tonyistomin.xyz/content/images/size/w1600/2022/03/image.png 1600w, https://tonyistomin.xyz/content/images/2022/03/image.png 1810w" sizes="(min-width: 720px) 720px"></figure><p> Then choose <code>Add Touch Bar Item</code>, label it <code>git status</code>, choose action <code>Send text with vim special characters</code>. Then write <code>git status\n</code> in the bottom field.</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2022/03/image-1.png" class="kg-image" alt="Macbook touch bar for git in iTerm2" loading="lazy" width="894" height="320" srcset="https://tonyistomin.xyz/content/images/size/w600/2022/03/image-1.png 600w, https://tonyistomin.xyz/content/images/2022/03/image-1.png 894w" sizes="(min-width: 720px) 720px"></figure><p>Repeat for every other git command you want. For <code>git commit</code> write <code>git commit -m &quot;&quot;\u0002</code> in the bottom field. This will send the text <code>git commit -m &quot;&quot;</code> to the console and then send a left arrow so the cursor is inside the quotes. Then you can write the commit message and press <code>Enter</code>.</p><p>Then choose <code>View</code> -&gt; <code>Customize Touch Bar...</code> and drag new commands to the touch bar. </p><p>That is all! 
Hope you find it useful.</p>]]></content:encoded></item><item><title><![CDATA[Integer partitions and threshold graphs]]></title><description><![CDATA[<p>In this post we are going to explore integer partitions and its Ferrer (Young) diagram properties and introduce integer partition lattice. Also we are going to explore graphical integer partitions (for which there exists a graph), their position in a lattice and threshold graphs which correspond to maximal graphical partitions.</p>]]></description><link>https://tonyistomin.xyz/integer-partitions-and-threshold-graphs/</link><guid isPermaLink="false">61b9f04685b074000123a9cd</guid><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Sun, 30 May 2021 12:58:43 GMT</pubDate><media:content url="https://tonyistomin.xyz/content/images/2020/08/Natural-Partition-Lattice-NPL--logo4-.png" medium="image"/><content:encoded><![CDATA[<img src="https://tonyistomin.xyz/content/images/2020/08/Natural-Partition-Lattice-NPL--logo4-.png" alt="Integer partitions and threshold graphs"><p>In this post we are going to explore integer partitions and its Ferrer (Young) diagram properties and introduce integer partition lattice. Also we are going to explore graphical integer partitions (for which there exists a graph), their position in a lattice and threshold graphs which correspond to maximal graphical partitions. Kohnert&apos;s criterion is used to determine if a partition is graphical and in edge cases maximal (corresponding to threshold graph). The C++ library which is based on the material discussed in this post includes algorithms that find threshold graph from specified graph, maximal graphical integer partition from specified graphical partition, shortest maximizing chain and other. These algorithms use increasing edge rotation for graphs.</p><h1 id="integer-partitions">Integer partitions</h1><p>What is integer partition? 
The definition is extremely simple: it is a sum of positive integers that equals the given integer: \( 10 = 1+3+6\)</p><p>You can also write it down as a descending sequence of numbers \( (6, 3, 1)\). So for an arbitrary integer \(n\), its integer partition is:</p><p>\[ (a_1, a_2, a_3, \ldots, a_k), \text{ where } n = a_1 + a_2 + a_3 + \ldots + a_k \text{ and } a_1 \geq a_2 \geq a_3 \geq \ldots \geq a_k \]</p><p>The Ferrers (or Young) diagram of an integer partition is a sequence of columns of blocks, where the first column corresponds to \(a_1\), the second to \(a_2\), and so on.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/06/image.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Ferrers diagram for (6, 3, 1)</figcaption></figure><p>An ascending block movement is an operation which transforms one valid integer partition into another in a specific way:</p><p>\[ (a_1, \ldots, a_i + 1, \ldots, a_j - 1, \ldots, a_k), \text{ where } a_1 \geq a_2 \geq a_3 \geq \ldots \geq a_k \text{ and } i &lt; j \]</p><p>For example: \( (6, 3, 1) \rightarrow (7, 3) \)</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/Screenshot-2020-08-15-at-21--2-.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Ascending block movement</figcaption></figure><p>A basic ascending block movement is one which cannot be split into two or more ascending block movements. The block movement above was not basic, as it can be split into two ascending block movements: \((6, 3, 1) \rightarrow (6, 4) \rightarrow (7, 3)\). 
An example of a basic block movement is \( (2, 1, 1, 1) \rightarrow (2, 2, 1) \):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/image.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Basic ascending block movement</figcaption></figure><p>Note that moving the fourth block to the third position is impossible, since it would give us the invalid integer partition \( (2, 1, 2, 1) \). It is invalid because \( a_2 = 1 &lt; a_3 = 2 \). We can define a descending basic block movement the same way; I will skip the definition for brevity.</p><p>Now let&apos;s define a partial order on the set of integer partitions. Let \( \lambda_1 \) and \( \lambda_2 \) be integer partitions of some number \( n \).</p><p>\[ \lambda_1 \leq \lambda_2 \Leftrightarrow \exists m_1, \ldots, m_k: m_1(\lambda_1) = a_2, m_2(a_2) = a_3, \ldots, m_k(a_k) = \lambda_2 \]</p><p>where \( m_1, \ldots, m_k \) are basic ascending block movements and \( a_2, \ldots, a_k \) are valid integer partitions of \( n \). This partially ordered set is also the natural partition lattice for \(n\), which we will call \(NPL(n)\).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/NPL-10-.PNG" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>\(NPL(10)\)</figcaption></figure><p>The elements marked in green are called maximal graphical partitions, which will be described in detail later. 
If we define an operation of adding a block at the end like this:</p><p>\[ (a_1, a_2, ..., a_k) \longrightarrow (a_1, a_2, ...,a_k, 1) \]</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/partition1.PNG" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Operation of adding a block at the end of a partition</figcaption></figure><p>And extend the operator \(\leq\) like this:</p><p>\[ \lambda_1 \leq \lambda_2 \Leftrightarrow \exists f_1, ..., f_k: f_1(\lambda_1) = a_2, f_2(a_2) = a_3, ..., f_k(a_k) = \lambda_2 \]</p><p>where \(\lambda_1\) and \(\lambda_2\) are integer partitions of some integers \(n\) and \(m\), each of \(f_1...f_k\) is either a basic ascending block movement or a block addition on the right side, and \(a_2...a_k\) are valid integer partitions. </p><p>Then we get the natural partition lattice for all integers (\(NPL\)):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/Natural-Partition-Lattice-NPL.PNG" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Natural Partition Lattice (\(NPL\))</figcaption></figure><p>Now let&apos;s explore some properties of Ferrers diagrams of integer partitions.</p><p>The largest square of blocks contained in a Ferrers diagram is called the &quot;Durfee square&quot;.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/Durfee-square.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Durfee square</figcaption></figure><p>The top row of the Durfee square together with everything above it is called the &quot;head&quot; of the integer partition and is denoted \(hd(\lambda)\).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/head-1.png" class="kg-image" alt="Integer 
partitions and threshold graphs" loading="lazy"><figcaption>Head</figcaption></figure><p>The blocks to the right of the Durfee square are called the &quot;conjugate tail&quot;. This is denoted \(tl^{\ast}(\lambda)\).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/conjugate-tail.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Conjugate tail</figcaption></figure><p>A conjugate integer partition is the partition that you get if you mirror the original partition across the bottom-left to top-right diagonal. The conjugate of \(\lambda = (6, 3, 1, 1)\) is \(\lambda^{\ast} = (4, 2, 2, 1, 1, 1)\). The tail \(tl(\lambda)\) is the conjugate of the conjugate tail, so the tail of \(\lambda = (6, 3, 1, 1)\) is \((2)\). In other words, \(tl(\lambda) = (2)\).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/conjugate.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>The conjugate to the partition \(\lambda\)</figcaption></figure><p>We call a partition \(\lambda\) <em>graphical</em> if there exists a graph \(G\) whose vertex degrees \(v_1 \geq v_2 \geq ... \geq v_k\) form exactly this partition, i.e. \(\lambda = (v_1, v_2, ..., v_k)\). The most interesting fact is probably Kohnert&apos;s criterion and its consequences. </p><h1 id="kohnert-s-criterion">Kohnert&apos;s criterion</h1><p>An integer partition \(\lambda\) of an even number is graphical if and only if \(hd(\lambda) \leq tl(\lambda)\).</p><p>Also, \(\lambda\) is called a maximal graphical partition if \(hd(\lambda) = tl(\lambda)\). 
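All of these notions are easy to compute. The following JavaScript sketch (mine, assuming a partition is given as a descending array; the article&apos;s actual library is in C++) computes the conjugate, the Durfee square size, the head and the tail:

```javascript
// Conjugate: mirror the diagram across the diagonal.
// conjugate([6, 3, 1, 1]) -> [4, 2, 2, 1, 1, 1], as in the example above.
function conjugate(p) {
  const result = []
  for (let i = 1; i <= (p[0] || 0); i++) {
    result.push(p.filter(part => part >= i).length)
  }
  return result
}

// Side of the largest d x d square of blocks fitting into the diagram.
function durfee(p) {
  let d = 0
  while (d < p.length && p[d] >= d + 1) d++
  return d
}

// Head: the top row of the Durfee square and everything above it,
// i.e. the first d columns cut off below row d.
function head(p) {
  const d = durfee(p)
  return p.slice(0, d).map(part => part - d + 1)
}

// Tail: the conjugate of the conjugate tail
// (the columns entirely to the right of the Durfee square).
function tail(p) {
  const d = durfee(p)
  return conjugate(p.slice(d))
}
```

For the star \(K_{1,3}\) with degree partition \((3, 1, 1, 1)\) we get \(hd = tl = (3)\), so that partition is maximal graphical.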
Scroll up to \(NPL(10)\) and look at the highlighted partitions: they are all maximal graphical.</p><h1 id="threshold-graphs">Threshold graphs</h1><p>By definition a threshold graph is a graph that can be constructed from a one-vertex graph by repeated applications of the following two operations:</p><ol><li>Addition of a single isolated vertex to the graph.</li><li>Addition of a single dominating vertex to the graph, i.e. a single vertex that is connected to all other vertices.</li></ol><p>Let&apos;s define simple operations on graphs called edge rotations:</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2020/08/edge_rotation.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"></figure><p>A triple of vertices \(A, B, C\) in a graph \(G = \langle V, E \rangle\) such that \(AB \in E\) and \(BC \not\in E\) is called increasing (or lifting) if \(deg(A) \leq deg(C)\) and decreasing (or lowering) if \(deg(A) \geq 2 + deg(C)\).</p><p>If there is no increasing edge rotation in a graph \(G\), it turns out that \(G\) is a threshold graph. If we consider the integer partition corresponding to \(G\), we will find out that it is maximal graphical.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/image--1-.png" class="kg-image" alt="Integer partitions and threshold graphs" loading="lazy"><figcaption>Threshold graph and its partition</figcaption></figure><p>In the example above the head of the integer partition is equal to its tail, therefore it is maximal graphical and corresponds to a threshold graph.</p><h1 id="algorithm">Algorithm</h1><p>The question arises whether we can transform a given graph into a threshold graph using increasing edge rotations, and, if we can, what is the shortest sequence of such transformations. 
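Checking for a lifting triple can be done by brute force on small graphs. A hypothetical JavaScript sketch (mine, not the article&apos;s C++ implementation), where a null result means the graph is threshold:

```javascript
// My brute-force sketch for small graphs given as adjacency matrices.
// Returns a lifting triple [A, B, C] (edge AB present, edge BC absent,
// deg(A) <= deg(C)), or null if none exists, i.e. the graph is threshold.
function findIncreasingTriple(adj) {
  const n = adj.length
  const deg = adj.map(row => row.reduce((sum, x) => sum + x, 0))
  for (let a = 0; a < n; a++) {
    for (let b = 0; b < n; b++) {
      for (let c = 0; c < n; c++) {
        if (a === b || b === c || a === c) continue
        if (adj[a][b] === 1 && adj[b][c] === 0 && deg[a] <= deg[c]) {
          return [a, b, c]
        }
      }
    }
  }
  return null
}

// The path on 3 vertices is a threshold graph, the path on 4 vertices is not.
const p3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
const p4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]

findIncreasingTriple(p3) // null
findIncreasingTriple(p4) // a lifting triple exists
```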
</p><p>It turns out we can always do that, because an increasing edge rotation always corresponds to an ascending block movement and transforms the given graphical partition into a greater one (in the lattice).</p><p>Using a simple <a href="https://github.com/ReFruity/threshold-graph/blob/master/algorithm.cpp#L350">breadth first search</a> (BFS) and increasing edge rotations we can find every shortest transformation sequence that converts our graph into a threshold graph.</p><p>You can find an implementation of this algorithm and others in this C++ <a href="https://github.com/ReFruity/threshold-graph">threshold graph and integer partition library</a>.</p><h1 id="references">References</h1><ol><li><a href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&amp;jrnid=semr&amp;paperid=1216&amp;option_lang=eng">On maximal graphical partitions that are the nearest to a given graphical partition</a> (in russian language) - <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=22902">V. A. Baransky</a>, <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=66357">T. A. Senchonok</a></li><li><a href="https://drive.google.com/file/d/1AVsJY_IrHT1AQmgjmbmqaUQf37JtjTVe/view?usp=sharing">On threshold graphs and implementation of graphical partitions</a> (in russian language) - <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=22902">V. A. Baransky</a>, <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=66357">T. A. Senchonok</a></li><li><a href="https://drive.google.com/file/d/1SJOO708xH3A-mSMhpyhnZlGu_Rw7L1E8/view?usp=sharing">On natural number partition lattice (2015)</a> (in russian language) - <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=22902">V. A. Baransky</a>, <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=66357">T. A. Senchonok</a>, T. A. 
Koroleva</li><li><a href="https://drive.google.com/file/d/127z7UpyfBf_JiaIdhJGHgfZCxOvpyXuk/view?usp=sharing">On lattice of partitions of all natural numbers (2016)</a> (in russian language) - <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=22902">V. A. Baransky</a>, <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=66357">T. A. Senchonok</a>, T. A. Koroleva</li><li><a href="https://drive.google.com/file/d/17293FWI2ayGlf8MTYe5Mo0IDbM72hkzH/view?usp=sharing">New algorithm of generation of graphical sequences (2016)</a> (in russian language) - <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=22902">V. A. Baransky</a>, <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=66357">T. A. Senchonok</a>, T. I. Nadymova</li><li><a href="https://drive.google.com/file/d/1Ll3U0WxcTSmTMAg08FMVfvCf53f56Yk8/view?usp=sharing">On maximal graphical partitions (2017)</a> (in russian language) - <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=22902">V. A. Baransky</a>, <a href="http://www.mathnet.ru/php/person.phtml?option_lang=eng&amp;personid=66357">T. A. Senchonok</a></li><li><a href="https://drive.google.com/file/d/1h47yr5toQlzFaDUsrqlXzR2m2Nx6m2Dl/view?usp=sharing">Natural numbers partitions and chromatic uniqueness of graphs (2008)</a> (in russian language) - T. A. Koroleva</li><li><a href="https://drive.google.com/file/d/1ftyh93q56EYV_U2EXMGF7AxLYOm82wN0/view?usp=sharing">Graphical sequences and their generation algorithms (2016)</a> (in russian language) - T. I. 
Nadymova </li></ol><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Threshold_graph">Threshold graph</a>, <a href="https://en.wikipedia.org/wiki/Partition_(number_theory)">Integer partition</a>, <a href="https://en.wikipedia.org/wiki/Lattice_(order)#Lattices_as_partially_ordered_sets">Lattices as partially ordered sets</a></p>]]></content:encoded></item><item><title><![CDATA[A little update]]></title><description><![CDATA[<p>I write this post to inform you that this blog is not dead :) I now have 2 side-projects which we will talk about in this blog later.</p><p>The first one is my unfinished master&apos;s degree work on graphical integer partitions and threshold graphs. Here is a little spoiler (natural partition</p>]]></description><link>https://tonyistomin.xyz/a-little-update/</link><guid isPermaLink="false">61b9f04685b074000123a9cf</guid><category><![CDATA[update]]></category><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Tue, 25 Aug 2020 19:10:01 GMT</pubDate><media:content url="https://tonyistomin.xyz/content/images/2020/08/facebook-cover-horizon-space.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://tonyistomin.xyz/content/images/2020/08/facebook-cover-horizon-space.jpg" alt="A little update"><p>I write this post to inform you that this blog is not dead :) I now have 2 side-projects which we will talk about in this blog later.</p><p>The first one is my unfinished master&apos;s degree work on graphical integer partitions and threshold graphs. Here is a little spoiler (natural partition lattice for number 10):</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/NPL-10-.PNG" class="kg-image" alt="A little update" loading="lazy"><figcaption>NPL(10)</figcaption></figure><p>The second one is basically a semi-automatic problem generator for the game of Go. 
It uses <a href="http://fuego.sourceforge.net/">fuego</a> as its engine and tries to generate a game tree using the minimax algorithm. Then you can upload this file to <a href="https://goproblems.com">goproblems.com</a> and it will work.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2020/08/image-1.png" class="kg-image" alt="A little update" loading="lazy"><figcaption>Go problem for you. Black to move.</figcaption></figure><p>That&apos;s all for now folks. See you soon!</p><p>Follow the telegram channel for updates: <a href="https://t.me/refruity_xyz">t.me/refruity_xyz</a></p>]]></content:encoded></item><item><title><![CDATA[Game of Life and AssemblyScript]]></title><description><![CDATA[Step by step tutorial to writing Game of Life in typescript then compiling it using AssemblyScript. ]]></description><link>https://tonyistomin.xyz/assembly-script/</link><guid isPermaLink="false">61b9f04685b074000123a9cb</guid><category><![CDATA[assemblyscript]]></category><category><![CDATA[webassembly]]></category><category><![CDATA[tutorial]]></category><category><![CDATA[game of life]]></category><category><![CDATA[typescript]]></category><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Sun, 11 Aug 2019 12:56:44 GMT</pubDate><media:content url="https://tonyistomin.xyz/content/images/2019/08/1280px-LLVM_Logo.svg.png" medium="image"/><content:encoded><![CDATA[<img src="https://tonyistomin.xyz/content/images/2019/08/1280px-LLVM_Logo.svg.png" alt="Game of Life and AssemblyScript"><p>Originally I planned to write about <a href="https://wasi.dev/">WASI</a> (that&apos;s why the <a href="https://en.wikipedia.org/wiki/LLVM">LLVM</a> logo is in the background). It is a system interface for WebAssembly. Basically a means to run <code>wasm</code> files without a browser and give them access to system stuff. But it turned out that it was a rather boring topic to write about. 
Just use <a href="https://github.com/CraneStation/wasmtime">wasmtime</a> (which I could not compile) or <a href="https://github.com/wasmerio/wasmer">wasmer</a> (which I ran successfully with two commands) and execute <code>wasm</code> files if you desire so.</p><p>But today we are going to explore the paths of <a href="https://github.com/AssemblyScript/assemblyscript">AssemblyScript</a> and <a href="https://webassembly.org/">WebAssembly</a>. We are going to learn how to compile <a href="https://www.typescriptlang.org/">TypeScript</a> into WebAssembly and also compare performance of compiled JavaScript and WebAssembly by implementing <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Game of Life</a>. &#xA0;We will use Ubuntu 18.04 as our OS and <a href="https://webpack.js.org/">webpack</a> as our module bundler.</p><p>tl;dr here is codepen with examples:</p><figure class="kg-card kg-embed-card"><iframe id="cp_embed_ymPdpo" src="https://codepen.io/ReFruity/embed/preview/ymPdpo?height=300&amp;slug-hash=ymPdpo&amp;default-tabs=js,result&amp;host=https://codepen.io" title="Game of Life" scrolling="no" frameborder="0" height="300" allowtransparency="true" class="cp_embed_iframe" style="width: 100%; overflow: hidden;"></iframe></figure><p>Run the pen and you will see that WebAssembly gives around 60 FPS, whereas JS gives only 32 FPS. You can also click and drag to add extra chaos to the field.</p><p>Alright, back to the tutorial. Let&apos;s start with dependencies. You can install latest node using <a href="https://github.com/creationix/nvm#installation-and-update">nvm</a>. Then initialize new repo by adding <code>package.json</code>:</p><pre><code class="language-javascript">{
  &quot;name&quot;: &quot;assemblyscript-game-of-life&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;&quot;,
  &quot;main&quot;: &quot;src/index.js&quot;,
  &quot;scripts&quot;: {
    &quot;start&quot;: &quot;webpack-dev-server&quot;,
    &quot;build&quot;: &quot;npm run asbuild &amp;&amp; webpack-cli&quot;,
    &quot;asbuild&quot;: &quot;asc assembly/index.ts -b compiled/gameoflife.wasm --validate --optimize --importMemory --use Math=JSMath&quot;
  },
  &quot;license&quot;: &quot;ISC&quot;,
  &quot;devDependencies&quot;: {
    &quot;assemblyscript&quot;: &quot;github:AssemblyScript/assemblyscript&quot;,
    &quot;copy-webpack-plugin&quot;: &quot;^5.0.4&quot;,
    &quot;wasm-loader&quot;: &quot;^1.3.0&quot;,
    &quot;webpack&quot;: &quot;^4.38.0&quot;,
    &quot;webpack-cli&quot;: &quot;^3.3.6&quot;,
    &quot;webpack-dev-server&quot;: &quot;^3.7.2&quot;
  },
  &quot;dependencies&quot;: {}
}</code></pre><p>Here we see the <code>start</code> script, which is used for developing the application; it opens webpack-dev-server on port <code>9000</code>. We also see two build scripts: the first is <code>build</code>, which tells webpack to compile its <code>bundle.js</code>; the second is <code>asbuild</code>, which tells AssemblyScript to compile a <code>ts</code> file into a <code>wasm</code> file. Let&apos;s stop for a minute on the second script. It has a <a href="https://docs.assemblyscript.org/details/compiler">couple of arguments</a>. The interesting ones are <code>importMemory</code> and <code>use Math</code>. The former will allow us to import a memory object and use it both inside and outside the <code>wasm</code> file, the latter is needed to import the <code><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math">Math</a></code> object.</p><p>Now you can run <code>npm install</code> to install the dependencies. Create the files <code>assembly/index.ts</code> and <code>src/index.js</code>. Also create the following <code>webpack.config.js</code>:</p><pre><code class="language-javascript">const path = require(&apos;path&apos;);
const CopyWebpackPlugin = require(&apos;copy-webpack-plugin&apos;);

module.exports = {
  entry: &apos;./src/index.js&apos;,
  devtool: &apos;source-map&apos;,
  resolve: {
    extensions: [ &apos;.js&apos; ]
  },
  module: {
    defaultRules: [
      {
        type: &quot;javascript/auto&quot;,
        resolve: {}
      },
      {
        test: /\.json$/i,
        type: &quot;json&quot;
      }
    ],
    rules: [
      {
        test: /\.wasm$/,
        use: &apos;wasm-loader&apos;
      }
    ],
  },
  output: {
    filename: &apos;bundle.js&apos;,
    path: path.resolve(__dirname, &apos;dist&apos;)
  },
  plugins: [
    new CopyWebpackPlugin([{ from: path.resolve(__dirname, &apos;index.html&apos;), to: path.resolve(__dirname, &apos;dist&apos;) }])
  ],
  devServer: {
    contentBase: path.join(__dirname, &apos;dist&apos;),
    compress: true,
    port: 9000
  }
};</code></pre><p><code>defaultRules</code> is shamelessly copy-pasted from this <a href="https://github.com/webpack/webpack/issues/6725">github issue</a>. I have no idea how that works but that&apos;s not the point. The point is: we can now import <code>wasm</code> files!</p><p>But let&apos;s start simple. Create <code>index.html</code>:</p><pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset=&quot;UTF-8&quot;&gt;
    &lt;title&gt;Game of Life&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
  &lt;script type=&quot;text/javascript&quot; src=&quot;bundle.js&quot;&gt;&lt;/script&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><p>In <code>src/index.js</code> write:</p><pre><code class="language-javascript">import GameOfLife from &apos;../compiled/gameoflife.wasm&apos;

async function start() {
  const memory = new WebAssembly.Memory({ initial: 256 })
  const importObject = {
    env: {
      abort: () =&gt; {},
      memory
    },
    imports: { f }
  }

  const game = await GameOfLife(importObject)
  const { add } = game.instance.exports

  function f(x) {
    console.log(&apos;f&apos;, x)
  }

  console.log(add(12, 30))
}

start()</code></pre><p>To be sure, <code>f</code> is a bad name for a function. But on each step of building this project it helped me so much that it grew on me and I don&apos;t want to change it now. Now for <code>assembly/index.ts</code>:</p><pre><code class="language-javascript">@external(&apos;imports&apos;, &apos;f&apos;)
declare function f(x: i32): void

f(1)

export function add(x: i32, y: i32): i32 {
  return x + y
}</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2019/08/Screenshot-from-2019-08-05-20-53-46.png" class="kg-image" alt="Game of Life and AssemblyScript" loading="lazy"><figcaption>This is the file structure we will be using for our project</figcaption></figure><p>The moment of truth: run <code>npm run asbuild</code>, then <code>npm start</code> and open <code>localhost:9000</code> in your favorite (not IE) browser. Open developers tools (F12 in chrome) and look at the result.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://tonyistomin.xyz/content/images/2019/08/Screenshot-from-2019-08-05-21-43-47.png" class="kg-image" alt="Game of Life and AssemblyScript" loading="lazy"><figcaption>Glorious</figcaption></figure><p>The reason this result is important is that we were able to use core mechanics like import function <code>f</code> from js to wasm and run that function, export and run function <code>add</code> from <code>wasm</code> and also load <code>wasm</code> file using <a href="https://github.com/ballercat/wasm-loader">wasm-loader</a>.</p><p>Do you like modules? I do! Let&apos;s extract <code>index.js</code> as a separate <code>GameOfLifeWebAssembly.js</code> module. Create a new file with:</p><pre><code class="language-javascript">import GameOfLife from &apos;../compiled/gameoflife.wasm&apos;

export default async () =&gt; {
  const memory = new WebAssembly.Memory({ initial: 256 })
  const importObject = {
    env: {
      abort: () =&gt; {},
      memory
    },
    imports: { f }
  }

  const game = await GameOfLife(importObject)
  const { add } = game.instance.exports

  function f(x) {
    console.log(&apos;f&apos;, x)
  }

  return { add }
}</code></pre><p>And also introduce <code>Main</code> module in <code>index.js</code>:</p><pre><code class="language-javascript">import GameOfLifeWebAssembly from &apos;./GameOfLifeWebAssembly&apos;

async function Main() {
  const game = await GameOfLifeWebAssembly()
  console.log(game.add(12, 30))
}

window.addEventListener(&apos;load&apos;, Main)</code></pre><p>You can run the application again to make sure it produces the same output as before the refactoring. Now add a canvas to your <code>index.html</code> <code>&lt;body&gt;</code>:</p><pre><code class="language-html">&lt;canvas id=&quot;canvas&quot;&gt;&lt;/canvas&gt;</code></pre><p>We will need this canvas to display alive and dead cells. 1 cell = 1 pixel on the canvas. Some js code will be needed to display <code><a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint32Array">Uint32Array</a></code> on the canvas. In your <code>index.js</code>:</p><pre><code class="language-javascript">  const width = 100
  const height = 100

  const canvas = document.getElementById(&apos;canvas&apos;)
  canvas.width  = width
  canvas.height = height

  const context = canvas.getContext(&apos;2d&apos;)
  const imageData = context.createImageData(width, height)
  const imageDataView = new Uint32Array(imageData.data.buffer)

  imageDataView[0] = 0xFF000000

  context.putImageData(imageData, 0, 0)</code></pre><p>We create a new <a href="https://developer.mozilla.org/en-US/docs/Web/API/ImageData">ImageData</a> and set its first value to <code>0xFF000000</code>. <code>uint32</code> stands for 32-bit unsigned integer. Unsigned means that the first bit does not indicate a sign, so every <code>uint32</code> ranges from <code>0</code> to <code>2^32 - 1 = 4294967295</code>. Each <code>uint32</code> in the array represents a pixel on the canvas and is in the <code>ABGR</code> format, which stands for Alpha, Blue, Green, Red. The alpha channel is the transparency of the color, where <code>0xFF</code> means opaque. To sum it up, we get an opaque black pixel. Open <code>localhost:9000</code> and look at the black pixel in the top left of your screen.</p><p>Calling external js functions from wasm is slower than writing and reading shared memory. From the js side we will need a memory view in <code>GameOfLifeWebAssembly.js</code>:</p><pre><code class="language-javascript">const memory = new WebAssembly.Memory({ initial: 256 })
const memoryView = new Uint32Array(memory.buffer)
...
return { memoryView }</code></pre><p>From wasm side we will need functions <code>store&lt;u32&gt;</code> and <code>load&lt;u32&gt;</code> in <code>index.ts</code>:</p><pre><code class="language-javascript">@external(&apos;imports&apos;, &apos;width&apos;)
declare const WIDTH: i32

@external(&apos;imports&apos;, &apos;height&apos;)
declare const HEIGHT: i32

@external(&apos;imports&apos;, &apos;f&apos;)
declare function f(x: i32): void

const BLACK = 0xFF000000
const WHITE = 0

function set(x: i32, y: i32): void {
  store&lt;u32&gt;(toPointer(x, y), BLACK)
}

function toPointer(x: i32, y: i32): i32 {
  return (y * WIDTH + x) * 4
}

set(0, 0)
set(0, 1)
</code></pre><p>Don&apos;t forget to import width and height. Also in <code>index.js</code> we will need to copy data from shared memory to canvas. To do this try:</p><pre><code class="language-javascript">  const imageDataView = new Uint32Array(imageData.data.buffer)

  imageDataView.set(game.memoryView.slice(0, width * height))

  context.putImageData(imageData, 0, 0)</code></pre><p>Now open <code>localhost:9000</code> and look carefully. You should be able to spot 2 black pixels in the top left corner.</p><p>Now grab <code><a href="https://github.com/ReFruity/assemblyscript-game-of-life/blob/master/assembly/index.ts">index.ts</a></code> from my github repo, modify <code>GameOfLifeWebAssembly.js</code> like that:</p><pre><code class="language-javascript">import GameOfLife from &apos;../compiled/gameoflife.wasm&apos;

export default async (width, height) =&gt; {
  const memory = new WebAssembly.Memory({ initial: 256 })
  const memoryView = new Uint32Array(memory.buffer)
  const importObject = {
    env: {
      abort: () =&gt; {},
      memory
    },
    imports: { f, width, height },
    Math
  }

  const game = await GameOfLife(importObject)

  console.log(game)
  const { randomize, step } = game.instance.exports

  function f(x) {
    console.log(&apos;f&apos;, x)
  }

  return { randomize, step, memoryView }
}</code></pre><p>To add game loop we will need <code><a href="https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame">requestAnimationFrame</a></code> function. In <code>index.js</code>:</p><pre><code class="language-javascript">  game.randomize()

  function cycle() {
    game.step()
    imageDataView.set(game.memoryView.slice(0, width * height))
    context.putImageData(imageData, 0, 0)
    requestAnimationFrame(cycle)
  }

  cycle()</code></pre><p>Run <code>npm run build</code>. Voila! You have a working 100x100 Game of Life on <code>localhost:9000</code>.</p><p>Also, to further address performance questions, I added a simple <a href="https://github.com/ReFruity/assemblyscript-game-of-life/blob/master/src/benchmark.js">benchmark</a> in my repo. I run the game&apos;s <code>step</code> a couple of times on a randomized board and measure the milliseconds it takes to complete. The results are:</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2019/08/Screenshot-from-2019-08-10-12-19-44.png" class="kg-image" alt="Game of Life and AssemblyScript" loading="lazy"></figure><p>JS takes twice as much time as WebAssembly to calculate. This benchmark is representative because rendering on the canvas is identical for the JS and WebAssembly code.</p><p>The final version with the JS/WebAssembly comparison and also a painting feature is available on my <a href="https://github.com/ReFruity/assemblyscript-game-of-life">github</a>.</p><p>Fun fact: before <code>@inline</code>-ing some functions in wasm, its FPS was noticeably lower than the FPS of the JS code.</p><p>Important note: the official AssemblyScript repository has <a href="https://assemblyscript.github.io/assemblyscript/examples/game-of-life/">Game of Life</a> already implemented. <a href="https://github.com/AssemblyScript/assemblyscript/tree/master/examples/game-of-life">Here</a> is their source code. I learned some mechanics by reading this code. There are <a href="https://medium.com/@carlosbaraza/hopefully-simple-webassembly-starting-guide-9300f5a1c0d7">other</a> <a href="https://blog.openbloc.fr/webassembly-first-steps/#part1">tutorials</a> that helped.</p><p>P.S. 
Look at <a href="https://jtiscione.github.io/webassembly-wave/index.html">this cool demo</a> with a simulation of the wave equation.</p>]]></content:encoded></item><item><title><![CDATA[Writing Discord Bot With Speech Recognition]]></title><description><![CDATA[Automate your discord server using discord bot. Simple step by step tutorial on how to create a bot with voice commands.]]></description><link>https://tonyistomin.xyz/writing-discord-bot/</link><guid isPermaLink="false">61b9f04685b074000123a9c8</guid><category><![CDATA[discord]]></category><category><![CDATA[bot]]></category><category><![CDATA[node]]></category><category><![CDATA[speech recognition]]></category><category><![CDATA[role]]></category><category><![CDATA[assignment]]></category><category><![CDATA[tutorial]]></category><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Wed, 06 Mar 2019 12:00:00 GMT</pubDate><media:content url="https://tonyistomin.xyz/content/images/2019/03/Bastion_portrait.png" medium="image"/><content:encoded><![CDATA[<img src="https://tonyistomin.xyz/content/images/2019/03/Bastion_portrait.png" alt="Writing Discord Bot With Speech Recognition"><p>Bots provide a lot of versatility to discord. You can automate certain tasks using them. For example, imagine a server where voice channels correspond to games. Our bot will join the user&apos;s voice channel when the user starts playing a game and ask them if they want to be transferred to the right channel. We will be using <a href="https://nodejs.org">node</a> version 8.10.0, <a href="https://cloud.google.com/speech-to-text/">Google speech recognition</a>, <a href="https://www.johnvansickle.com/ffmpeg/">ffmpeg</a> and <a href="https://discord.js.org/">discord.js</a>. You can install node 8.10.0 using <a href="https://github.com/creationix/nvm#installation-and-update">nvm</a>. Also I recommend using the latest Ubuntu OS.</p><p>First things first, we want a discord API token. 
There are <a href="https://github.com/reactiflux/discord-irc/wiki/Creating-a-discord-bot-&amp;-getting-a-token">some</a> <a href="https://github.com/Chikachi/DiscordIntegration/wiki/How-to-get-a-token-and-channel-ID-for-Discord">tutorials</a> on the Internet on how to get it. Also you will need to add your discord bot user to your discord server. Then create a file named <code>config.json</code> and paste your API token there:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">{
  &quot;discordApiToken&quot;: &quot;your-token-here&quot;
}
</code></pre>
<!--kg-card-end: markdown--><p>Then create <code><a href="https://docs.npmjs.com/files/package.json">package.json</a></code> file and add dependencies: </p><!--kg-card-begin: markdown--><pre><code class="language-javascript">{
  &quot;name&quot;: &quot;myawesomebot&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;Discord bot with speech recognition&quot;,
  &quot;main&quot;: &quot;index.js&quot;,
  &quot;scripts&quot;: {
    &quot;start&quot;: &quot;node index.js&quot;
  },
  &quot;dependencies&quot;: {
    &quot;@google-cloud/speech&quot;: &quot;^2.1.1&quot;,
    &quot;discord.js&quot;: &quot;https://github.com/discordjs/discord.js.git#123713305ad5a6aa1e5205a53713494009740aef&quot;,
    &quot;node-opus&quot;: &quot;^0.3.1&quot;
  },
  &quot;license&quot;: &quot;ISC&quot;
}
</code></pre>
<!--kg-card-end: markdown--><p>Create <code>index.js</code> file:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">const Discord = require(&apos;discord.js&apos;)
const config = require(&apos;./config&apos;)

const discordClient = new Discord.Client()

discordClient.on(&apos;ready&apos;, () =&gt; {
  console.log(`Logged in as ${discordClient.user.tag}!`)
})

discordClient.login(config.discordApiToken)
</code></pre>
<!--kg-card-end: markdown--><p>Now we want to test if our bot is set up correctly. Run <code>npm install</code> and <code>npm start</code> and look in your console. There should be a message starting with <code>Logged in as</code> and the bot should come online in your discord server. Congratulations! You wrote your first discord bot. But for now this bot does nothing, so let&apos;s add some functionality. To do this we will need to learn about <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function">async functions</a> which will help us reduce nesting and make our code prettier:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">discordClient.on(&apos;presenceUpdate&apos;, async (oldPresence, newPresence) =&gt; {
  console.log(&apos;New Presence:&apos;, newPresence)

  const member = newPresence.member
  const presence = newPresence
  const memberVoiceChannel = member.voice.channel

  if (!presence || !presence.activity || !presence.activity.name || !memberVoiceChannel) {
    return
  }

  const connection = await memberVoiceChannel.join()

  connection.on(&apos;speaking&apos;, (user, speaking) =&gt; {
    if (speaking) {
      console.log(`I&apos;m listening to ${user.username}`)
    } else {
      console.log(`I stopped listening to ${user.username}`)
    }
  })
})
</code></pre>
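<p>To see concretely how <code>async</code>/<code>await</code> flattens promise chains, here is a tiny stand-alone comparison (not Discord-specific, just an illustration):</p>

```javascript
// A contrived stand-in for an async API call such as memberVoiceChannel.join()
function fetchNumber() {
  return Promise.resolve(42)
}

// Promise style: every dependent step adds another .then() nesting level
fetchNumber().then(n => {
  console.log('promise style:', n)
})

// async/await style: the same logic reads top to bottom
async function main() {
  const n = await fetchNumber()
  console.log('async style:', n)
  return n
}

main()
```

<p>This is exactly why the <code>presenceUpdate</code> handler above is declared <code>async</code>: it lets us <code>await memberVoiceChannel.join()</code> instead of nesting callbacks.</p>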
<!--kg-card-end: markdown--><p>Now try joining a voice channel and then starting a game. The bot should join you and log your presence, which includes the name of the game. It should also detect when you are speaking.</p><p>So far so good. Time for some real speech recognition. For this we will need <a href="https://cloud.google.com/speech-to-text/docs/quickstart-client-libraries">Google speech API credentials</a> (read the first item in the &quot;Before you begin&quot; section). Save your credentials in a <code>google-credentials.json</code> file in your project folder. After that you can either use <code><a href="https://www.npmjs.com/package/dotenv">dotenv</a></code> or start your app with</p><!--kg-card-begin: markdown--><pre><code>GOOGLE_APPLICATION_CREDENTIALS=&quot;[PATH]&quot; npm start
</code></pre>
<!--kg-card-end: markdown--><p>where <code>[PATH]</code> is the full path to your <code>google-credentials.json</code> file.</p><p>If you use <code>dotenv</code>, then create a <code>.env</code> file</p><!--kg-card-begin: markdown--><pre><code>GOOGLE_APPLICATION_CREDENTIALS=&quot;google-credentials.json&quot;
</code></pre>
<!--kg-card-end: markdown--><p>in your project folder after installing the package, and then add</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">require(&apos;dotenv&apos;).config()
</code></pre>
<!--kg-card-end: markdown--><p>at the start of your <code>index.js</code>.</p><p>Nice. Now let&apos;s try recognizing your beautiful voice. To do this we need one more thing. The <code>Discord.js</code> function <code><a href="https://discord.js.org/#/docs/main/stable/class/VoiceReceiver?scrollTo=createPCMStream">createPCMStream</a></code> creates a 16-bit signed PCM, stereo 48kHz stream, but Google speech recognition takes mono input, i.e. 1 channel. So we have to convert the 2-channel stream to a 1-channel one. We achieve this by creating a <a href="https://nodejs.org/api/stream.html#stream_implementing_a_transform_stream">transform stream</a>:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">const { Transform } = require(&apos;stream&apos;)

function convertBufferTo1Channel(buffer) {
  const convertedBuffer = Buffer.alloc(buffer.length / 2)

  for (let i = 0; i &lt; convertedBuffer.length / 2; i++) {
    const uint16 = buffer.readUInt16LE(i * 4)
    convertedBuffer.writeUInt16LE(uint16, i * 2)
  }

  return convertedBuffer
}

class ConvertTo1ChannelStream extends Transform {
  constructor(source, options) {
    super(options)
  }

  _transform(data, encoding, next) {
    next(null, convertBufferTo1Channel(data))
  }
}
</code></pre>
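<p>If you want to convince yourself that this conversion keeps only the left channel, you can run it on a tiny hand-made buffer (the function body below is copied from the block above):</p>

```javascript
// Sanity check for the stereo-to-mono conversion: build a buffer of two
// interleaved stereo samples and confirm only the left channel survives.
function convertBufferTo1Channel(buffer) {
  const convertedBuffer = Buffer.alloc(buffer.length / 2)

  for (let i = 0; i < convertedBuffer.length / 2; i++) {
    const uint16 = buffer.readUInt16LE(i * 4)
    convertedBuffer.writeUInt16LE(uint16, i * 2)
  }

  return convertedBuffer
}

const stereo = Buffer.alloc(8)
stereo.writeUInt16LE(0x1111, 0) // L0
stereo.writeUInt16LE(0x2222, 2) // R0
stereo.writeUInt16LE(0x3333, 4) // L1
stereo.writeUInt16LE(0x4444, 6) // R1

const mono = convertBufferTo1Channel(stereo)
console.log(mono) // only the left-channel samples remain: <Buffer 11 11 33 33>
```

<p>Each stereo frame is 4 bytes (2 bytes left + 2 bytes right), so reading at offset <code>i * 4</code> picks the left sample of every frame and drops the right one.</p>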
<!--kg-card-end: markdown--><p>We are ready to implement voice recognition:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">const googleSpeech = require(&apos;@google-cloud/speech&apos;)

const googleSpeechClient = new googleSpeech.SpeechClient()

discordClient.on(&apos;presenceUpdate&apos;, async (oldPresence, newPresence) =&gt; {
  console.log(&apos;New Presence:&apos;, newPresence)

  const member = newPresence.member
  const presence = newPresence
  const memberVoiceChannel = member.voice.channel

  if (!presence || !presence.activity || !presence.activity.name || !memberVoiceChannel) {
    return
  }

  const connection = await memberVoiceChannel.join()
  const receiver = connection.receiver

  connection.on(&apos;speaking&apos;, (user, speaking) =&gt; {
    if (!speaking) {
      return
    }

    console.log(`I&apos;m listening to ${user.username}`)

    // this creates a 16-bit signed PCM, stereo 48KHz stream
    const audioStream = receiver.createStream(user, { mode: &apos;pcm&apos; })
    const requestConfig = {
      encoding: &apos;LINEAR16&apos;,
      sampleRateHertz: 48000,
      languageCode: &apos;en-US&apos;
    }
    const request = {
      config: requestConfig
    }
    const recognizeStream = googleSpeechClient
      .streamingRecognize(request)
      .on(&apos;error&apos;, console.error)
      .on(&apos;data&apos;, response =&gt; {
        const transcription = response.results
          .map(result =&gt; result.alternatives[0].transcript)
          .join(&apos;\n&apos;)
          .toLowerCase()
        console.log(`Transcription: ${transcription}`)
      })

    const convertTo1ChannelStream = new ConvertTo1ChannelStream()

    audioStream.pipe(convertTo1ChannelStream).pipe(recognizeStream)

    audioStream.on(&apos;end&apos;, async () =&gt; {
      console.log(&apos;audioStream end&apos;)
    })
  })
})

</code></pre>
<!--kg-card-end: markdown--><p>Start the bot, join a voice channel, start any game and then say something in English. You should see a transcription of your words in the console. For now we will only need the words &quot;yes&quot; and &quot;no&quot; to command our bot.</p><p>Note: it may seem at this point that the bot doesn&apos;t hear you and doesn&apos;t recognize your words. In <a href="https://github.com/discordjs/discord.js/issues/2929">this GitHub issue</a> they say that Discord had (or has) a bug that prevents bots from listening to you until the bot has played some sound. So just proceed with the tutorial, we will implement playing sounds before recognition.</p><p>The bot is silent right now, so we need it to ask the user a question:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">async function playFile(connection, filePath) {
  return new Promise((resolve, reject) =&gt; {
    const dispatcher = connection.play(filePath)
    dispatcher.setVolume(1)
    dispatcher.on(&apos;start&apos;, () =&gt; {
      console.log(&apos;Playing&apos;)
    })
    // discord.js v12 renamed the StreamDispatcher &apos;end&apos; event to &apos;finish&apos;
    dispatcher.on(&apos;finish&apos;, () =&gt; {
      resolve()
    })
    dispatcher.on(&apos;error&apos;, (error) =&gt; {
      console.error(error)
      reject(error)
    })
  })
}
...
  const connection = await memberVoiceChannel.join()
  const receiver = connection.receiver

  await playFile(connection, &apos;wrongChannelEn.mp3&apos;)

  connection.on(&apos;speaking&apos;, (user, speaking) =&gt; {
...
</code></pre>
<!--kg-card-end: markdown--><p>You can download <code><a href="https://github.com/ReFruity/EzBot/blob/master/audio/wrongChannelEn.mp3">wrongChannelEn.mp3</a></code> or record your own voice line. If the audio is not playing for you, it is probably because of missing <code>ffmpeg</code> binaries. You can download them <a href="https://www.johnvansickle.com/ffmpeg/">here</a> for your system. This <a href="https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz">link</a> is for Ubuntu x64. Decompress the archive and put the <code>ffmpeg</code> binary somewhere on your <code>PATH</code>, e.g. in the <code>/usr/bin</code> folder.</p><p>The last step is to create a mapping between games and channels and to tell our bot to transfer people to the right channel on &quot;yes&quot;. To copy the voice channel ID use the <a href="https://github.com/Chikachi/DiscordIntegration/wiki/How-to-get-a-token-and-channel-ID-for-Discord#get-the-channel-id-of-the-discord-text-channel">guide</a> mentioned earlier. Since I&apos;m using Ubuntu, I will use &quot;Mines&quot; as the game for this demonstration.</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">const GamesAndChannels = {
  Mines: &apos;[VoiceChannelID]&apos;
}

discordClient.on(&apos;presenceUpdate&apos;, async (oldPresence, newPresence) =&gt; {
  // same checks as before: we need a presence with an activity name
  // and the member has to be in a voice channel
  const member = newPresence.member
  const presence = newPresence
  const memberVoiceChannel = member.voice.channel

  if (!presence || !presence.activity || !presence.activity.name || !memberVoiceChannel) {
    return
  }

  const channelId = GamesAndChannels[presence.activity.name]
  
  if (!channelId) {
    return
  }
  
  const connection = await memberVoiceChannel.join()
  const receiver = connection.receiver

  await playFile(connection, &apos;wrongChannelEn.mp3&apos;)

  setTimeout(() =&gt; {
    memberVoiceChannel.leave()
  }, 30000)

  connection.on(&apos;speaking&apos;, (user, speaking) =&gt; {
    if (!speaking) {
      return
    }

    console.log(`I&apos;m listening to ${user.username}`)

    // this creates a 16-bit signed PCM, stereo 48KHz stream
    const audioStream = receiver.createStream(user, { mode: &apos;pcm&apos; })
    const requestConfig = {
      encoding: &apos;LINEAR16&apos;,
      sampleRateHertz: 48000,
      languageCode: &apos;en-US&apos;
    }
    const request = {
      config: requestConfig
    }
    const recognizeStream = googleSpeechClient
      .streamingRecognize(request)
      .on(&apos;error&apos;, console.error)
      .on(&apos;data&apos;, response =&gt; {
        const transcription = response.results
          .map(result =&gt; result.alternatives[0].transcript)
          .join(&apos;\n&apos;)
          .toLowerCase()
        console.log(`Transcription: ${transcription}`)

        if (transcription === &apos;yes&apos;) {
          connection.channel.members.array().forEach(member =&gt; {
            if (member.user.id !== discordClient.user.id) {
              console.log(`Moving member ${member.displayName} to channel ${channelId}`)
              member.edit({ channel: channelId }).catch(console.error)
              memberVoiceChannel.leave()
            }
          })
        } else if (transcription === &apos;no&apos;) {
          memberVoiceChannel.leave()
        }
      })

    const convertTo1ChannelStream = new ConvertTo1ChannelStream()

    audioStream.pipe(convertTo1ChannelStream).pipe(recognizeStream)

    audioStream.on(&apos;end&apos;, async () =&gt; {
      console.log(&apos;audioStream end&apos;)
    })
  })
})
</code></pre>
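<p>One caveat: Google sometimes returns punctuation or capitalization (e.g. &quot;Yes.&quot;), so the exact <code>===</code> comparison above can be fragile. A slightly more forgiving matcher might look like this (a sketch of my own, not part of the original bot):</p>

```javascript
// Hypothetical helper: normalize the transcription before matching commands
function parseYesNo(transcription) {
  // strip everything except letters and whitespace, then lowercase
  const normalized = transcription.toLowerCase().replace(/[^a-z\s]/g, '').trim()
  if (/\b(yes|yeah)\b/.test(normalized)) return 'yes'
  if (/\b(no|nope)\b/.test(normalized)) return 'no'
  return null
}

console.log(parseYesNo('Yes.'))  // 'yes'
console.log(parseYesNo('Nope!')) // 'no'
console.log(parseYesNo('maybe')) // null
```

<p>You could then replace the two <code>transcription === ...</code> checks with a single <code>parseYesNo(transcription)</code> call.</p>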
<!--kg-card-end: markdown--><p>Done. Now join a wrong voice channel (not the one with the ID you specified earlier), fire up &quot;Mines&quot; and tell the bot &quot;yes&quot;. You (and your buddies in the voice channel) should be transferred to the voice channel with the specified ID. If you tell the bot &quot;no&quot;, it just leaves the voice channel with sadness on its metal face. The final version of the code is available via <a href="https://gist.github.com/ReFruity/cd09a986fad15f06b83573fecbadf892">this gist</a>.</p><p>In this tutorial I demonstrated a simplified version of EzBot, a bot we developed specifically for our Discord server. It is open source under the ISC license (use it however you want). You can find it on my <a href="https://github.com/ReFruity/EzBot">GitHub</a>. It requires MongoDB, has useful commands, internationalization (only EN/RU currently), sends welcome messages to new users, and has a feature to assign people roles via an emote dashboard. This project was created in collaboration with my friend <a href="https://github.com/JustMrPhoenix">MrPhoenix</a>.</p><p>UPDATE 26.11.2019: Because of a breaking change in the Discord API, the code in this article stopped working. There is a <a href="https://github.com/discordjs/discord.js/pull/3578/files#diff-04c6e90faac2675aa89e2176d2eec7d8R38">pull request</a> that solves the problems, but it requires updating to the latest discord.js (master branch). I will rewrite the article when this pull request is merged. I have updated the mentioned gist for now though.</p><p>UPDATE 07.01.2020: The <a href="https://github.com/discordjs/discord.js/pull/3578">pull request</a> was merged. Now you can use the master branch of discord.js to make it work. I rewrote the article to comply with the new API introduced in the master branch of discord.js.</p><p>UPDATE 04.03.2022: <a href="https://github.com/discord/discord-api-docs/discussions/4510">Discord will shut off API versions 6 and 7</a>, so RIP discord.js v11 &amp; v12. 
I will update this article or write a new one to show the new ways of discord.js.</p>]]></content:encoded></item><item><title><![CDATA[First Post]]></title><description><![CDATA[<p>Well, I finally started up my own programming blog. Occasionally, when I have time &#xA0;(and use more productively than playing StarCraft II) I will write something here. For now look at this cute picture of Zeratul.</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2019/03/zerathul.jpg" class="kg-image" alt loading="lazy"></figure>]]></description><link>https://tonyistomin.xyz/first-post/</link><guid isPermaLink="false">61b9f04685b074000123a9c7</guid><dc:creator><![CDATA[Anton Istomin]]></dc:creator><pubDate>Tue, 19 Feb 2019 18:44:56 GMT</pubDate><content:encoded><![CDATA[<p>Well, I finally started up my own programming blog. Occasionally, when I have time &#xA0;(and use more productively than playing StarCraft II) I will write something here. For now look at this cute picture of Zeratul.</p><figure class="kg-card kg-image-card"><img src="https://tonyistomin.xyz/content/images/2019/03/zerathul.jpg" class="kg-image" alt loading="lazy"></figure>]]></content:encoded></item></channel></rss>