Jekyll2020-06-20T14:03:33+00:00https://csanchez.dev/feed.xmlCSanchezAustinMusings and experiments on Identity, DevOps, and Security.Directory Services etime redux2020-06-18T00:00:00+00:002020-06-18T00:00:00+00:00https://csanchez.dev/directory-services-etimes-redux<p>I previously wrote about <a href="/directory-services-etimes-analysis">Directory Services etime Analysis</a> where I showed how awk and jq can be powerful tools for working with <a href="https://www.forgerock.com/platform/directory-services">ForgeRock Directory Services</a> logs. In this blog post I take it a step further and improve the code to:</p>
<ul>
<li>Fully support Linux dates, including millisecond precision</li>
<li>Specify a start and end time for the analysis (down to the millisecond)</li>
<li>Support relative dates (see <code class="language-plaintext highlighter-rouge">info date</code> on Linux)</li>
<li>Filter raw logs</li>
<li>Add an overall transaction summary to the report</li>
<li>Package it all as a standalone script that can be used as-is</li>
<li>Optionally output a CSV file</li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Usage: opendj-ops-metrics.sh -a [ auditReport | auditCSV | auditJSON | getLogs ] [ -s <startDate> ] [ -e <endDate> ] [ -f fileList ] [ -r relativeTime ]
script parameters
-a metricAction: what to do: auditReport (table format), auditCSV (CSV output), auditJSON (JSON output), getLogs (filter raw JSON logs). Default: auditReport
-s startDate: Date to start searching
-e endDate: Date to end searching
-f fileList: list of files. If none provided, use all audit logs
-r relativeTime: generates a startDate and endDate relative to the current time if neither startDate nor endDate is specified
if startDate is specified, then the endDate is calculated relative to this parameter
if endDate is specified, then the startDate is calculated relative to this parameter
Date format (YYYY-MM-DDThh:mm:ss.uuu): YYYY - year, MM - month, DD - day, hh - hour, mm - minute, ss - second, uuu - millis
e.g.
1. get between dates: -s 2019-12-13T15:43:04.578 -e 2019-12-14T15:43:04.578
2. get everything after: -s 2019-12-13T15:43:04.578
3. get everything before: -e 2019-12-14T15:43:04.578
4. get last 10 minutes from current time: -r "10 min ago"
Valid modifiers for past times are: -, ago, yesterday, last
5. get 10 minutes after a start time: -s 2019-12-13T15:43:04.000 -r "10 min"
6. get 10 minutes before an end time: -e 2019-12-14T15:43:04.000 -r "10 min ago"
IMPORTANT NOTE: when specifying a startDate or endDate, do NOT include the Z timezone designation suffix
</code></pre></div></div>
<p>Take note that if you use <code class="language-plaintext highlighter-rouge">startDate</code> and/or <code class="language-plaintext highlighter-rouge">endDate</code>, you have to supply the date without the trailing <code class="language-plaintext highlighter-rouge">Z</code>. This is because the Linux <code class="language-plaintext highlighter-rouge">date</code> utility is used to generate comparable numbers for filtering in the awk script. For example, if you pass <code class="language-plaintext highlighter-rouge">startDate</code>, the following code generates the comparable numbers.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> # relative to startTS
startTS=$(date +%Y%m%d%H%M%S%3N --date "${startDate}")
endTS=$(date +%Y%m%d%H%M%S%3N --date "${startDate} ${relativeTime}")
</code></pre></div></div>
<p>Passing the date parameter with a <code class="language-plaintext highlighter-rouge">Z</code> causes the <code class="language-plaintext highlighter-rouge">date</code> utility to interpret it as UTC and convert it to the current time zone, which may not match up with the timestamps in the logs. Omitting the <code class="language-plaintext highlighter-rouge">Z</code> generates a date that is aligned with the log files.</p>
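<p>As a quick, self-contained illustration (not the script itself; the dates and relative time below are taken from the usage examples above), here is how those two lines turn timestamps into sortable numeric keys with GNU <code class="language-plaintext highlighter-rouge">date</code>:</p>

```shell
# Sketch only: convert an ISO-style timestamp (no trailing Z) into a
# sortable numeric key, the way the script builds startTS/endTS.
startDate="2019-12-13T15:43:04.578"
relativeTime="10 min"

startTS=$(date +%Y%m%d%H%M%S%3N --date "${startDate}")
endTS=$(date +%Y%m%d%H%M%S%3N --date "${startDate} ${relativeTime}")

echo "${startTS}"   # 20191213154304578
echo "${endTS}"     # 20191213155304578

# Plain integer comparison now works for range filtering:
[ "${endTS}" -gt "${startTS}" ] && echo "range ok"
```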
<p>Here’s the new gist for the bash script I wrote; I’ll walk you through it in the gist comments.
<script src="https://gist.github.com/73ceaf17e620546f32d9faa35dece344.js"> </script></p>
<p>This is an example of the report produced:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Protocol Operation Status Tx Time Median Min Max 90 95 99 StdDev
-------- --------- ---------- -------- -------- -------- ----- ----- ---- ---- ---- --------
LDAP ADD SUCCESSFUL 73892 472145 6.38966 1 325 11 17 45 12.1267
LDAP BIND FAILED 1910 7088 3.71099 0 169 4 15 76 15.0926
LDAP BIND SUCCESSFUL 11929 48188 4.03957 0 222 5 13 91 14.2503
LDAP CONNECT SUCCESSFUL 1887 0 0 0 0 0 0 0 0
LDAP DISCONNECT SUCCESSFUL 1885 0 0 0 0 0 0 0 0
LDAP EXTENDED SUCCESSFUL 260 65 0.25 0 22 1 1 3 1.47381
LDAP MODIFY FAILED 8 8 1 0 6 6 6 6 1.93649
LDAP MODIFY SUCCESSFUL 99 146 1.47475 0 24 2 3 24 2.44678
LDAP SEARCH FAILED 58530 102169 1.74558 0 169 3 7 23 6.0059
LDAP SEARCH SUCCESSFUL 580032 1345135 2.31907 0 308 3 5 57 10.626
LDAP UNBIND null 325 0 0 null null null null null 0
LDAPS BIND SUCCESSFUL 1 14 14 14 14 14 14 14 0
LDAPS CONNECT SUCCESSFUL 1 0 0 0 0 0 0 0 0
LDAPS SEARCH SUCCESSFUL 1 34 34 34 34 34 34 34 0
internal ADD SUCCESSFUL 370997 1087062 2.93011 1 330 4 5 9 5.74374
internal MODIFY SUCCESSFUL 111995 54351 0.485298 0 223 1 1 2 2.44537
-------- --------- ---------- -------- -------- -------- ----- ----- ---- ---- ---- --------
Total: 1213752 3116405 2.56758
</code></pre></div></div>
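<p>If you’re curious how columns like the percentiles can be computed without Splunk, the core idea is to sort the etimes and index into the sorted list. This is a hedged sketch of the technique with made-up sample values, not the gist’s exact code:</p>

```shell
# Sketch: summary stats over a sorted list of etimes with awk.
# The percentile is read from the sorted array at position
# round(p * n); real implementations vary in how they round.
stats=$(printf '%s\n' 3 1 4 1 5 9 2 6 | sort -n | awk '
{ v[NR] = $1; sum += $1 }
END {
    n = NR
    p90 = v[int(0.90 * n + 0.5)]
    printf "n=%d sum=%d mean=%.3f min=%d max=%d p90=%d",
           n, sum, sum / n, v[1], v[n], p90
}')
echo "${stats}"   # n=8 sum=31 mean=3.875 min=1 max=9 p90=6
```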
<p>The same report, output as CSV:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Protocol,Operation,Status,Tx,Time,Median,Min,Max,90,95,99,StdDev
LDAP,ADD,SUCCESSFUL,73892,472145,6.38966,1,325,11,17,45,12.1267
LDAP,BIND,FAILED,1910,7088,3.71099,0,169,4,15,76,15.0926
LDAP,BIND,SUCCESSFUL,11929,48188,4.03957,0,222,5,13,91,14.2503
LDAP,CONNECT,SUCCESSFUL,1887,0,0,0,0,0,0,0,0
LDAP,DISCONNECT,SUCCESSFUL,1885,0,0,0,0,0,0,0,0
LDAP,EXTENDED,SUCCESSFUL,260,65,0.25,0,22,1,1,3,1.47381
LDAP,MODIFY,FAILED,8,8,1,0,6,6,6,6,1.93649
LDAP,MODIFY,SUCCESSFUL,99,146,1.47475,0,24,2,3,24,2.44678
LDAP,SEARCH,FAILED,58530,102169,1.74558,0,169,3,7,23,6.0059
LDAP,SEARCH,SUCCESSFUL,580032,1345135,2.31907,0,308,3,5,57,10.626
LDAP,UNBIND,null,325,0,0,null,null,null,null,null,0
LDAPS,BIND,SUCCESSFUL,1,14,14,14,14,14,14,14,0
LDAPS,CONNECT,SUCCESSFUL,1,0,0,0,0,0,0,0,0
LDAPS,SEARCH,SUCCESSFUL,1,34,34,34,34,34,34,34,0
internal,ADD,SUCCESSFUL,370997,1087062,2.93011,1,330,4,5,9,5.74374
internal,MODIFY,SUCCESSFUL,111995,54351,0.485298,0,223,1,1,2,2.44537
</code></pre></div></div>
<p>And as JSON (partial):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"Protocol": "LDAP",
"Operation": "ADD",
"Status": "SUCCESSFUL",
"Tx": 73892,
"Time": 472145,
"Median": 6.38966,
"Min": 1,
"Max": 325,
"90": 11,
"95": 17,
"99": 45,
"StdDev": 12.1267
}
{
"Protocol": "LDAP",
"Operation": "BIND",
"Status": "FAILED",
"Tx": 1910,
"Time": 7088,
"Median": 3.71099,
"Min": 0,
"Max": 169,
"90": 4,
"95": 15,
"99": 76,
"StdDev": 15.0926
}
...
</code></pre></div></div>
<p>Same disclaimer as the last post: this approach is brute force and consumes a fair amount of system resources, so it’s not advised to run it on a production server.</p>
<p>I hope you find this useful. Feel free to submit a PR for this article if you have improvements.</p>Chris SanchezFixing Directory Services conflicted entries2020-02-21T00:00:00+00:002020-02-21T00:00:00+00:00https://csanchez.dev/manage-conflicted-entries<p>As a ForgeRock Directory Services owner/operator, one has to regularly review logs to catch any number of operational problems that may surface. One problem that you may encounter is conflicted entries.</p>
<p>This happens due to conflicts during replication: a DS replica has already applied a change to an entry in its database when it receives the replication event to add, update, or delete that same entry. Directory Services preserves the change by creating a conflicted entry, distinguishable by its DN (which now looks like: entryuuid=entryUUID-value+original-RDN,original-parent-DN) and by the addition of the operational attribute <code class="language-plaintext highlighter-rouge">ds-sync-conflict</code>. ForgeRock has a KB article detailing <a href="https://backstage.forgerock.com/knowledge/kb/article/a37856549">conflicted entries</a> and some <a href="https://backstage.forgerock.com/docs/ds/6/admin-guide/#repl-conflict">product documentation</a> that describes the fix.</p>
<p>To find conflicted entries, you can use <code class="language-plaintext highlighter-rouge">ldapsearch</code> to search for entries that have the <code class="language-plaintext highlighter-rouge">ds-sync-conflict</code> attribute. The trailing <code class="language-plaintext highlighter-rouge">1.1</code> is the LDAP “no attributes” OID, so only DNs are returned.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "#### Checking for conflicts"
/opt/opendj/bin/ldapsearch \
--bindDN "cn=Directory Manager" \
--bindPassword "password" \
--hostname localhost \
--port 1389 \
--trustAll \
--baseDN "dc=zibernetics,dc=com" \
'(ds-sync-conflict=*)' 1.1
</code></pre></div></div>
<p>This generates output that looks something like the following:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dn: entryuuid=45e10549-78ab-4205-95cb-59fdf56ee59c+uid=851b60ca-4643-4789-aef6-b792e3fe680f,ou=People,dc=zibernetics,dc=com
dn: entryuuid=2cfa5520-8486-43fe-a416-85d85474f2fc+uid=d048ba75-025a-429f-828a-a037098423e8,ou=People,dc=zibernetics,dc=com
</code></pre></div></div>
<h4 id="fixing-conflicted-entries">Fixing Conflicted Entries</h4>
<p>Of course, each entry should be carefully examined per the <a href="https://backstage.forgerock.com/docs/ds/6/admin-guide/#repl-conflict">product documentation</a> to ensure accurate resolution. The following gist assumes that all changes that need to be applied to replicas in the cluster to achieve consistency have been performed and the conflicted entries can be deleted.</p>
<p>The key to getting this to work is line 19, <code class="language-plaintext highlighter-rouge">'(ds-sync-conflict=*)' 1.1</code>, which returns only the DN of each conflicted entry, and the <code class="language-plaintext highlighter-rouge">awk</code> program on the next line, <code class="language-plaintext highlighter-rouge">'$1 == "dn:" { print $0; print "changetype: delete"; print ""}'</code>, which captures each DN and generates a little bit of LDIF to delete the entry. Finally, the LDIF is fed to <code class="language-plaintext highlighter-rouge">ldapmodify</code> with the <code class="language-plaintext highlighter-rouge">--numConnections</code> option, which opens multiple connections to the LDAP server so the changes execute concurrently.</p>
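<p>To see just the transformation step in isolation (using one of the conflicted DNs from the search output above), the <code class="language-plaintext highlighter-rouge">awk</code> one-liner turns each DN into a delete request:</p>

```shell
# Sketch: feed a DN-only ldapsearch result through the awk program to
# produce delete LDIF. In the real script this output is piped straight
# into ldapmodify; here it is just captured and printed.
ldif=$(printf 'dn: entryuuid=45e10549-78ab-4205-95cb-59fdf56ee59c+uid=851b60ca-4643-4789-aef6-b792e3fe680f,ou=People,dc=zibernetics,dc=com\n' |
awk '$1 == "dn:" { print $0; print "changetype: delete"; print "" }')
echo "${ldif}"
```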
<script src="https://gist.github.com/63d6c2165032c759cb419e0dc5547769.js"> </script>
<p>I hope you find this useful. Feel free to submit a PR for this article if you have improvements.</p>Chris SanchezDirectory Services etime Analysis2020-02-20T00:00:00+00:002020-02-20T00:00:00+00:00https://csanchez.dev/directory-services-etimes-analysis<p>I was recently preparing for an upcoming event where participants can vote for their favorite artist. My work consists of running <a href="https://jmeter.apache.org/" target="_blank">JMeter</a> load tests on <a href="https://www.blazemeter.com" target="_blank">BlazeMeter</a> to simulate the high-volume spikes we get during the event. Long story short, I forgot to disable <a href="https://www.splunk.com" target="_blank">Splunk</a> log forwarding during the test and started flooding our Splunk instance with audit logs. Since it’s a shared resource and has daily limits, an hour or two of load testing can impact other users, and even shut down logging entirely after repeated incidents.</p>
<p>That presented a problem because my target for analysis was <a href="https://www.forgerock.com/platform/directory-services" target="_blank">ForgeRock Directory Services</a> (an LDAP server), and what I was trying to evaluate were response times (or <code class="language-plaintext highlighter-rouge">etimes</code>, as they’re known) for different LDAP calls. Without Splunk logs I’d be blind. Well, almost blind. I still had audit logs in JSON format on the Directory Services environment that I could use for analysis. It’s a bit of a pain because Splunk has some really nice built-in functions for <a href="https://docs.splunk.com/Documentation/SplunkCloud/8.0.2001/Search/Aboutadvancedstatistics" target="_blank">advanced statistics</a> and can aggregate all logs in the cluster. But it would have to do.</p>
<p>Since my analysis focused on <code class="language-plaintext highlighter-rouge">etimes</code>, I needed typical stats such as min, max, median, 90th, 95th, and 99th percentiles, and standard deviation. All I had to do was calculate all the things that Splunk gave me for free. Easy, right? I decided to keep it simple and stick to tools already available in my Directory Services environment: <code class="language-plaintext highlighter-rouge">bash</code>, <code class="language-plaintext highlighter-rouge">jq</code>, and <code class="language-plaintext highlighter-rouge">awk</code>. The good news is that Directory Services can be configured to log audit data in JSON format.</p>
<p>After a session with <a href="https://www.stackoverflow.com" target="_blank">StackOverflow</a>, the solution I came up with uses <code class="language-plaintext highlighter-rouge">jq</code> to extract all the data I care about, then pipes that into an <code class="language-plaintext highlighter-rouge">awk</code> program to process the data, collect statistics, and print a report. StackOverflow also taught me a technique for calculating <a href="https://stackoverflow.com/questions/15101343/standard-deviation-of-an-arbitrary-number-of-numbers-using-bc-or-other-standard" target="_blank">standard deviation using awk</a>.</p>
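<p>The one-pass trick from that StackOverflow answer accumulates the sum and the sum of squares, then takes the square root of (mean of squares minus squared mean) at the end. A minimal sketch with made-up sample data, not the gist’s exact code:</p>

```shell
# Sketch of one-pass standard deviation in awk: track sum (s) and sum
# of squares (ss), then sqrt(ss/n - (s/n)^2) in the END block.
sd=$(printf '%s\n' 2 4 4 4 5 5 7 9 | awk '
{ s += $1; ss += $1 * $1; n++ }
END { printf "%.4f", sqrt(ss / n - (s / n) ^ 2) }')
echo "${sd}"   # 2.0000 for this sample (mean 5, variance 4)
```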
<p>Here’s the gist for the bash command I wrote that I’ll walk you through in the comments. Also, I’d appreciate some feedback on ways to improve that <code class="language-plaintext highlighter-rouge">awk</code> command.</p>
<script src="https://gist.github.com/049670fde05c1b991c12e81821a76518.js"> </script>
<p>This is an example of the report produced:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Protocol Operation Status Tx Time Median Min Max 90 95 99 StdDev
-------- --------- ------ -------- -------- -------- ----- ----- ---- ---- ---- --------
LDAP ADD SUCCESSFUL 8037 6617 0.823317 0 2 1 1 1 0.390109
LDAP BIND SUCCESSFUL 2250 7017 3.11867 0 74 14 14 23 6.11582
LDAP CONNECT SUCCESSFUL 2250 0 0 0 0 0 0 0 0
LDAP DELETE FAILED 337 68 0.20178 0 1 1 1 1 0.401329
LDAP DELETE SUCCESSFUL 7506 6361 0.847455 0 2 1 1 1 0.374788
LDAP DISCONNECT SUCCESSFUL 2250 0 0 0 0 0 0 0 0
LDAP EXTENDED SUCCESSFUL 1800 98 0.0544444 0 1 0 1 1 0.226893
LDAP MODIFY SUCCESSFUL 5016 3028 0.603668 0 1 1 1 1 0.489135
LDAP SEARCH FAILED 6998 2101 0.300229 0 2 1 1 1 0.458669
LDAP SEARCH SUCCESSFUL 1694820 559283 0.329996 0 137 1 1 1 0.767667
LDAP UNBIND null 2250 0 0 null null 0 null null 0
LDAPS BIND SUCCESSFUL 6 83 13.8333 13 14 14 14 14 0.372678
LDAPS CONNECT SUCCESSFUL 6 0 0 0 0 0 0 0 0
LDAPS DISCONNECT SUCCESSFUL 6 0 0 0 0 0 0 0 0
LDAPS SEARCH FAILED 3 1 0.333333 0 1 1 1 1 0.471405
LDAPS SEARCH SUCCESSFUL 162 45 0.277778 0 4 1 2 3 0.650261
LDAPS UNBIND null 6 0 0 null null null null null 0
internal ADD SUCCESSFUL 40805 34854 0.85416 0 82 1 1 1 0.677344
internal DELETE SUCCESSFUL 37786 26367 0.697798 0 65 1 1 1 0.568919
internal MODIFY SUCCESSFUL 51484 23351 0.453558 0 47 1 1 1 0.538942
</code></pre></div></div>
<p>This approach is brute force and takes up a bit of system resources, so it’s not advised to run this on a production server. However, you can simply copy the logs and perform the command elsewhere.</p>
<p>I hope you find this useful. Feel free to submit a PR for this article if you have improvements.</p>Chris SanchezBash shell completion for OpenAM/OpenDJ2017-05-29T00:00:00+00:002017-05-29T00:00:00+00:00https://csanchez.dev/bash-shell-completion<p>Ludovic Poitou, ForgeRock’s OpenDJ Product Manager, <a href="https://ludopoitou.com" target="_blank">blogs</a> about OpenDJ a bit. One of his posts shows a simple technique for adding <a href="https://ludopoitou.com/2011/06/20/opendj-tip-auto-completion-of-dsconfig-command" target="_blank">bash shell completion</a> for OpenDJ’s administrative tool, <em>dsconfig</em>. I liked it because it makes access to the CLI help for dsconfig more convenient. Since I’ve been doing a lot of work with OpenDJ and OpenAM recently, I thought I’d improve it.</p>
<p>One of my projects is called Identity Fabric. It is designed to make it easy to install distributed, production-grade deployments of identity and access management platforms such as ForgeRock’s OpenDJ and OpenAM. Using Ludo’s technique, I created global startup scripts that include bash completions for both OpenDJ and OpenAM.</p>
<p>Create <code class="language-plaintext highlighter-rouge">/etc/profile.d/openam.sh</code> (assuming root permissions and that you’ve already set up SSOTools):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">OPENAM_SSOADM</span><span class="o">=</span>/openam/openam
<span class="nb">echo</span> <span class="s2">"export PATH=</span><span class="se">\$</span><span class="s2">{PATH}:</span><span class="k">${</span><span class="nv">OPENAM_SSOADM</span><span class="k">}</span><span class="s2">/bin"</span> <span class="o">></span> /etc/profile.d/openam.sh
<span class="nb">echo</span> <span class="s2">"complete -W </span><span class="se">\"</span><span class="si">$(</span><span class="k">${</span><span class="nv">OPENAM_SSOADM</span><span class="k">}</span>/bin/ssoadm <span class="nt">--help</span> 2>/dev/null | egrep <span class="s1">'^ {4}[a-z]*-[a-z].*'</span> | <span class="nb">sed</span> <span class="s1">'s/[ *]*//g'</span><span class="si">)</span><span class="se">\"</span><span class="s2"> ssoadm"</span> <span class="o">>></span> /etc/profile.d/openam.sh
<span class="nb">chmod</span> +x /etc/profile.d/openam.sh
</code></pre></div></div>
<p>Then log out and back in:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>ssouser@castor ~]<span class="nv">$ </span>ssoadm get-<tab>
get-attr-choicevals get-auth-cfg-entr get-identity get-realm get-recording-status get-sub-cfg
get-attr-defs get-auth-instance get-identity-svcs get-realm-svc-attrs get-revision-number get-svrcfg-xml
</code></pre></div></div>
<p>Create <code class="language-plaintext highlighter-rouge">/etc/profile.d/opendj.sh</code></p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">OPENDJ_HOME_DIR</span><span class="o">=</span>/opt/opendj
<span class="nb">echo</span> <span class="s2">"export PATH=</span><span class="se">\$</span><span class="s2">{PATH}:</span><span class="k">${</span><span class="nv">OPENDJ_HOME_DIR</span><span class="k">}</span><span class="s2">/bin"</span> <span class="o">></span> /etc/profile.d/opendj.sh
<span class="c"># sets up command completion</span>
<span class="nb">echo</span> <span class="s2">"complete -W </span><span class="se">\"</span><span class="si">$(</span><span class="k">${</span><span class="nv">OPENDJ_HOME_DIR</span><span class="k">}</span>/bin/dsconfig <span class="nt">--help-all</span>|grep <span class="s1">'^[a-z].*'</span> | <span class="nb">tr</span> <span class="s1">'\n'</span> <span class="s1">' '</span><span class="si">)</span><span class="se">\"</span><span class="s2"> dsconfig"</span> <span class="o">>></span> /etc/profile.d/opendj.sh
<span class="nb">echo</span> <span class="s2">"complete -W </span><span class="se">\"</span><span class="si">$(</span><span class="k">${</span><span class="nv">OPENDJ_HOME_DIR</span><span class="k">}</span>/bin/dsreplication <span class="nt">--help</span>|grep <span class="nt">-o</span> <span class="nt">-w</span> <span class="s1">'^[a-z-]*$'</span>|grep <span class="nt">-v</span> <span class="s1">'^--'</span>|tr <span class="s1">'\n'</span> <span class="s1">' '</span><span class="si">)</span><span class="se">\"</span><span class="s2"> dsreplication"</span> <span class="o">>></span> /etc/profile.d/opendj.sh
<span class="nb">chmod</span> +x /etc/profile.d/opendj.sh
</code></pre></div></div>
<p>And the same as above:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>ssouser@castor ~]<span class="nv">$ </span>dsconfig get-backend-<tab>
get-backend-index-prop get-backend-prop get-backend-vlv-index-prop
</code></pre></div></div>
<h4 id="quick-notes">Quick notes</h4>
<ol>
<li>I chose to use /etc/profile.d so that any user who logs in gets completion set up for OpenDJ and OpenAM.</li>
<li>I took a slightly different approach than Ludo for creating the bash startup file: I resolve the completion words when generating the file (see the echo statements), rather than embedding the resolution in the startup file itself. On some virtual machines, ssoadm and dsconfig can take a long time to return (even with the help subcommand), so each login is significantly faster this way.</li>
<li>Scripts in /etc/profile.d have to be executable, so remember to chmod +x /etc/profile.d/{openam.sh,opendj.sh}</li>
</ol>
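<p>Note 2 can be illustrated with a toy example: the expensive command substitution runs once, when the startup file is generated, so the generated file contains a literal word list. (<code class="language-plaintext highlighter-rouge">mytool</code> and the subcommand names below are placeholders standing in for ssoadm/dsconfig, not real commands.)</p>

```shell
# Sketch: resolve completion words now and bake them into the generated
# startup file, instead of re-running the slow tool at every login.
outfile=$(mktemp)
# Pretend this list came from: mytool --help | grep/sed filtering
subcommands="get-identity get-realm get-sub-cfg"
echo "complete -W \"${subcommands}\" mytool" > "${outfile}"
cat "${outfile}"   # complete -W "get-identity get-realm get-sub-cfg" mytool
```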
<p>I hope you find this useful. Feel free to submit a PR for this article if you have improvements.</p>Chris Sanchez